I would use a simple Perl script that does the work for you:
- you pass in the number of files you want (N) and a list of files to process
- it computes the line counts and stores the results in a hash keyed by filename
- it sorts the keys by count, descending, and prints the first N filenames
There's no error-checking in the script (e.g. that N is sane, that the list of files is non-empty, that N is not greater than the number of files, or that the arguments are regular files rather than directories, sockets, etc.).
#!/usr/bin/perl
# prints the top N given files by line count
use strict;
use warnings;

my $n = shift;    # number of files to print
my %counts;
foreach my $file (@ARGV) {
    # let wc do the counting; chomp strips the trailing newline
    chomp($counts{$file} = `wc -l < "$file"`);
}
# sort filenames by line count, descending, and print the first $n
foreach my $file (sort { $counts{$b} <=> $counts{$a} } keys %counts) {
    print "$file\n";
    last unless --$n;
}
This approach is easier than trying to sort values in shell arrays, or than relying on filenames not to contain certain characters. The script's output is still ambiguous if a filename contains a newline; if you were going to do further work with the files, I would do that work inside the Perl script itself.
cut -c7- on the end?

With zsh:
print -rC1 -- /etc/*.conf(.NOe:'REPLY=$(wc -l <$REPLY)':[1,5])

(for the five files with the most lines). The glob qualifiers do the work: "." restricts the match to regular files, "N" makes the glob expand to nothing instead of reporting an error when nothing matches, "Oe:'code':" sorts the matches in descending order of the value the code leaves in $REPLY (here, each file's line count), and "[1,5]" keeps only the first five matches.