Using curl and wget commands to download pages from web sites

One of the most versatile tools for collecting data from a server is curl. The “url” portion of the name aptly suggests that the command is built to locate data through the URL (uniform resource locator) that you provide. And it doesn’t just communicate with web servers; it supports a wide variety of protocols, including HTTP, HTTPS, FTP, FTPS, SCP, SFTP and more. The wget command, though similar in some ways to curl, primarily supports the HTTP and FTP protocols.
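To get a quick feel for how the two commands overlap, here is each one fetching the same page. The URL is only a placeholder; substitute any site you like:

  $ curl https://example.com/index.html     # prints the page to your terminal
  $ wget https://example.com/index.html     # saves the page as index.html in the current directory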

Using the curl command

You might use the curl command to do any of the following (a few sample commands appear just after the list):

Download files from the internet
Run tests to ensure that the remote server is doing what is expected
Do some debugging on various problems
Log errors for later analysis
Back up important files from the server
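Several of these tasks translate into simple one-liners. The URLs and file names below are only placeholders:

  $ curl -I https://example.com/                    # test the server by requesting only the response headers
  $ curl -o page-backup.html https://example.com/   # back up a page by saving it to a local file
  $ curl -sS https://example.com/ -o /dev/null 2>> curl-errors.log   # stay quiet on success, but log any errors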

Probably the most obvious thing to do with the curl command is to download a page from a web site so that you can review it on the command line. To do this, just enter “curl” followed by the URL of the web site.
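Using example.com as a stand-in for whatever site you want to look at, the command and the beginning of its (truncated) output would look something like this:

  $ curl https://example.com
  <!doctype html>
  <html>
  <head>
      <title>Example Domain</title>
  ...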

