curl Analysis
Read this first to learn how making a web request works: Mozilla: How the Web Works
Too bad all the web traffic is encrypted these days with HTTPS; that's super boring. We'll have to go find a plain HTTP website to look at. And instead of using a web browser, we'll use a command-line tool.
There are plenty of tools to download a file from the terminal, but I prefer to teach curl because it supports the most protocols (and can upload too, if you need it).
Steps:
1. Begin capturing packets in Wireshark on the correct external interface.
2. Run the command: `curl http://httpforever.com/`
3. Run the command: `wget http://httpforever.com/`
4. Stop your capture.
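When you later follow the TCP stream, each tool's request should look roughly like the minimal sketch below. The exact User-Agent strings are assumptions (they depend on which curl and wget builds you have installed; the real values are exactly what your capture will show), and wget typically sends a few extra headers not shown here.

```python
# Illustrative sketch of the raw HTTP/1.1 requests the two tools send.
# The User-Agent versions below are assumptions, not what you will
# necessarily see in your own capture.

def build_request(host: str, user_agent: str) -> str:
    """Build a minimal HTTP/1.1 GET request like the ones in the capture."""
    return (
        "GET / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"User-Agent: {user_agent}\r\n"
        "Accept: */*\r\n"
        "\r\n"  # blank line: end of headers
    )

curl_request = build_request("httpforever.com", "curl/8.5.0")   # assumed version
wget_request = build_request("httpforever.com", "Wget/1.21.4")  # assumed version

print(curl_request)
print(wget_request)
```

Note that headers are separated by `\r\n` and the request ends with a blank line; that structure is what makes the plaintext so easy to read in Wireshark.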
Assignment:
- Can you find a DNS request to httpforever.com? What is the filter needed to find it?
- What is the port you hit the site on?
- What is the default HTTP port?
- What happened when the connection started?
- What filter is needed to see all traffic between the site and you?
- What did you see when you followed the TCP stream?
- Were you able to see anything? If you did, can you extract the file(s)?
- What is the difference between the User-Agents in the two requests? Why?
- What happened when the connection ended?
- Was there anything else you found interesting?
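A few Wireshark display filters that may help with the questions above (the server address is a placeholder; fill it in from your own capture):

```
dns.qry.name contains "httpforever.com"
tcp.port == 80
ip.addr == <server-ip-from-your-capture> && tcp.port == 80
http
```

The first finds the DNS lookup, the second matches anything on the default HTTP port, the third scopes the view to just you and the site, and `http` on its own shows only the requests and responses.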
Submit a text file with your answers.