The magic of unix
Today I had to make a Java program work, and it was spewing out a lot of errors, most of which could be explained by external causes (trying to connect to servers that are currently down). So I have a log file containing traces, exception stacks, etc., and a lot of stuff is repeated. So how can I weed out the uninteresting parts?
Well, first: remove the duplicates.
Option a: write a Perl script to do that.
Option b: ask a friend if there is a simpler way.
Option b tends to work quite often in my office: it turns out there is, of course, a unix/linux/cygwin command to do just that: uniq!
But the manual says it only collapses duplicates that sit on adjacent lines, and every line in my log starts with a date, so no two lines are ever strictly identical anyway. How do I cut out the date part? Well, option b to the rescue: just use cut!
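A toy example (not my actual log, just to illustrate): uniq only collapses identical lines that are next to each other, and cut can slice off a leading timestamp, either by field (-d picks the delimiter, -f the field) or by byte position (-b). The field number and byte offset obviously depend on your exact timestamp format.

$ printf 'error A\nerror B\nerror B\nerror A\n' | uniq
error A
error B
error A
$ echo '2009-03-12 14:05:01,123 ERROR connection refused' | cut -f 2 -d , | cut -b 5-
ERROR connection refused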
So after cut you have lines that could be uniq'd, if only they were sorted. Well, it turns out there is a sort command! Who would have thought ;).
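Continuing the toy example: once sorted, the duplicates end up next to each other and uniq can finally do its job.

$ printf 'error A\nerror B\nerror A\n' | sort | uniq
error A
error B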
So: first you cut, then you sort, then you uniq, passing the output of each step to the next through… pipes, of course :).
cut -f 2 -d , test.txt | cut -b 4- | sort | uniq
cut -f 2 -d ] test.txt | sort | uniq
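A handy variant, if you also want to know how often each message shows up: uniq -c prefixes every surviving line with its count, and a numeric reverse sort then puts the noisiest messages at the top.

cut -f 2 -d , test.txt | cut -b 4- | sort | uniq -c | sort -rn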