2 ways to remove duplicate lines from Linux files

There are many ways to remove duplicate lines from a text file on Linux. Here are two that use the awk and uniq commands and that offer slightly different results.

Remove duplicate lines with awk

The first command we’ll examine in this post is a very unusual awk command that removes duplicate lines as it works through a file. It leaves the first instance of each line intact, but “remembers” it and removes any duplicates encountered afterwards.
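The command is surprisingly compact. A minimal sketch looks like this (the array name seen and the file name myfile.txt are placeholders):

    $ awk '!seen[$0]++' myfile.txt    # "seen" is an arbitrary array name

For every line it reads, awk uses the full line ($0) as the index into the seen array. The first time a particular line appears, seen[$0] is zero, so !seen[$0]++ is true and the line is printed; the post-increment then raises the count, so any later copy of that line tests false and is skipped. Redirect the output to a new file if you want to keep the de-duplicated result.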

Here’s an example. Suppose the file, which we’ll call myfile.txt, starts out looking like this:
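    This is line one.
    This is line two.
    This is line one.
    This is line three.
    This is line two.
    This is line one.

Run against that file, the command prints each distinct line once, in the order in which it first appears:

    $ awk '!seen[$0]++' myfile.txt    # sample run on the illustrative file above
    This is line one.
    This is line two.
    This is line three.

One advantage of this approach over the uniq command is that the file doesn’t have to be sorted first; awk catches duplicate lines wherever they appear.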
