I'm wondering about this as well, because with every combination I try, the answer is still marked wrong. I've tried netstat -luntp | grep "LISTEN" | wc -l, nmap localhost -p 1-65535 | wc -l, and ss -l -4 | grep "LISTEN" | wc -l, but the output each one returns is apparently still the wrong answer.
I find the question a bit misleading, given that it says "(Not on localhost and IPv4 only)".
To get the correct answer you have to exclude the loopback addresses with grep -v "127.0.0".
Ah! What was confusing me was the statement "Not on localhost and IPv4 only". I was interpreting it as "not only limited to localhost and IPv4", so I was thinking: not limited to localhost (i.e. include EVERYTHING, including localhost), and not limited to IPv4 (i.e. include both IPv4 and IPv6). I was effectively working toward the opposite of the required answer. At least I got the LISTEN part correct.
If anyone has any issues, try this. You want what is listening on IPv4 only: 0.0.0.0 is the IPv4 wildcard address, so grepping for it keeps only sockets bound to IPv4.
netstat -luntp | grep "0.0.0.0" | grep -v "127.0.0" | grep "LISTEN" | wc -l
netstat -ln4 - services that are listening, with numeric addresses, using the IPv4 protocol as opposed to IPv6 or unspecified
grep LISTEN - keep only results containing the word "LISTEN"
grep -v 127 - exclude any results that contain "127"
wc -l - count the number of lines
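Since the real count depends on the host, here is the same chain run on a few fabricated netstat-style lines, just to show what each stage removes (the sample services are made up for illustration):

```shell
# Fabricated sample standing in for `netstat -ln4` output.
sample='tcp  0  0 0.0.0.0:22      0.0.0.0:*  LISTEN
tcp  0  0 127.0.0.1:631   0.0.0.0:*  LISTEN
udp  0  0 0.0.0.0:68      0.0.0.0:*'

# grep LISTEN drops the UDP line, grep -v 127 drops the loopback
# socket, and wc -l counts what is left.
printf '%s\n' "$sample" | grep LISTEN | grep -v 127 | wc -l
```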
I am not sure why grep "LISTEN" is needed, since the -l filter should return listening sockets only. If it is not used, UDP services are listed as well. Any clues why?
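One clue, sketched on fabricated output: UDP is connectionless, so netstat never prints a LISTEN state for UDP sockets. The -l flag still selects them, which is why the extra grep matters when -u is in the flag set:

```shell
# Fabricated netstat -lunt style lines. Note the UDP socket has no
# state column, so grep "LISTEN" is what excludes it from the count.
sample='tcp  0  0 0.0.0.0:22  0.0.0.0:*  LISTEN
udp  0  0 0.0.0.0:68  0.0.0.0:*'

printf '%s\n' "$sample" | wc -l             # counts both sockets
printf '%s\n' "$sample" | grep -c "LISTEN"  # counts only the TCP one
```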
curl -s retrieves the HTML content of the webpage https://www.inlanefreight.com in silent mode, without showing a progress meter or error messages.
grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" searches for all the links within the HTML content of the webpage by matching the regular expression (http|https)://[a-zA-Z0-9./?=_-]*. This matches any string that starts with http:// or https:// followed by any combination of letters, digits, dots, slashes, question marks, equals signs, underscores, and dashes; the -o flag prints only the matched parts, one per line.
sort -u sorts and removes any duplicate links from the output.
wc -l counts the number of lines in the output, which represents the total number of unique links found on the webpage.
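Assembled, the pipeline described above is curl -s https://www.inlanefreight.com | grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" | sort -u | wc -l. Its result depends on the live page, so here is the same extraction run on a small fabricated HTML snippet instead:

```shell
# Made-up HTML standing in for the curl -s output; example.com is a
# placeholder domain, not the target site.
html='<a href="https://example.com/a">one</a>
<a href="https://example.com/a">duplicate</a>
<a href="http://example.com/b?x=1">two</a>'

# Three matches, two unique after sort -u, so the final count is 2.
printf '%s\n' "$html" | grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" | sort -u | wc -l
```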
tr - replaces the dots with spaces
column - separates everything into columns
grep - then we filter for LISTEN
grep -v - filters out 127
wc - gives you your answer
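A sketch of that pipeline on fabricated netstat-style lines (on a real machine the input would come from something like netstat -lnt rather than a sample variable, and column requires util-linux):

```shell
# Fabricated listening-socket lines for illustration.
sample='tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN
udp 0 0 0.0.0.0:68 0.0.0.0:*'

# tr '.' ' '   -> dots become spaces, so 127.0.0.1 reads "127 0 0 1"
# column -t    -> realigns the fields into tidy columns
# grep LISTEN  -> keeps only listening TCP sockets
# grep -v 127  -> drops the loopback line
# wc -l        -> counts the remainder
printf '%s\n' "$sample" | tr '.' ' ' | column -t | grep LISTEN | grep -v 127 | wc -l
```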