Q. Use cURL from your Pwnbox (not the target machine) to obtain the source code of the “https://www.inlanefreight.com” website and filter all unique paths of that domain. Submit the number of these paths as the answer.
Step 1
curl https://www.inlanefreight.com > test.txt
Step 2
cat test.txt | tr " " "\n" | cut -d "'" -f2 | cut -d '"' -f2 | grep "www.inlanefreight.com" > data.txt
Step 3
Open the data.txt file in Sublime Text and delete the duplicates:
Click "Edit" > "Sort Lines" to sort the lines by value.
Then click "Edit" > "Permute Lines" > "Unique" to remove the duplicate values.
Save the file.
Step 4
cat data.txt | wc -l
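Put together, the steps above can be run as one pipeline and counted with sort -u instead of Sublime Text. The sketch below tests the pipeline against a small stand-in file rather than the live site, so the sample HTML (and the resulting count) is illustrative, not the real answer:

```shell
# Stand-in for the curl output; against the real target you would run:
#   curl https://www.inlanefreight.com > test.txt
cat > test.txt <<'EOF'
<a href="https://www.inlanefreight.com/index.php/about/">About</a>
<link rel='stylesheet' href='https://www.inlanefreight.com/style.css'>
<a href="https://www.inlanefreight.com/index.php/about/">About again</a>
<a href="https://example.com/other">elsewhere</a>
EOF

# Split on spaces, extract the value between single or double quotes,
# keep only the target domain, de-duplicate, and count.
tr " " "\n" < test.txt \
  | cut -d "'" -f2 \
  | cut -d '"' -f2 \
  | grep "www.inlanefreight.com" \
  | sort -u > data.txt

wc -l < data.txt   # prints 2 for this sample
```

sort -u replaces the manual "Sort Lines" + "Permute Lines > Unique" steps in the editor.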
Hi all. I can see this topic has already been covered, but if you fancy an alternative approach, I used a Python script I wrote on the output of curl/wget. The script filters out all lines containing the target website and then cleans up the lines. Not very terminal-based, and quick and dirty, but it worked nonetheless.
dicLine = set()
file = "./inlanefreight.txt"
with open(file, "r") as inl:
    for line in inl:
        lineArr = line.split(" ")
        for item in lineArr:
            if 'https://www.inlanefreight.com/' in item:
                # The URL sits between quotes, so splitting on the
                # quote character leaves it in element 1.
                if "'https:" in item:
                    i = item.split("'")
                else:
                    i = item.split('"')
                dicLine.add(i[1])
for link in dicLine:
    print(link)
print(len(dicLine))
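If you would rather not walk the quotes by hand, a regex over the raw HTML does the same job. This is a hypothetical variant of the script above, shown here against an inline sample instead of the saved file:

```python
import re

# Stand-in for the saved curl output; in practice you would read
# "./inlanefreight.txt" as in the script above.
html = """
<a href="https://www.inlanefreight.com/index.php/about/">About</a>
<link rel='stylesheet' href='https://www.inlanefreight.com/style.css'>
<a href="https://www.inlanefreight.com/index.php/about/">About again</a>
"""

# Match the domain plus everything up to the next quote of either kind,
# then let the set de-duplicate repeated links.
links = set(re.findall(r"https://www\.inlanefreight\.com/[^'\"]*", html))

for link in sorted(links):
    print(link)
print(len(links))
```

The set does the de-duplication, just as in the quote-splitting version, so the final print is the unique-path count.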
Amazing solution zikyfranky. Thank you. Could you please explain what the thinking process looks like when you build commands like this? How do you determine the sequence?
Mine looks like this (and please tell me where I went wrong):
curl the domain and redirect it to a text file - now I have a bunch of text
grep for that domain so I can see where the links are - okay, I saw them in different variations
I tried to find something they have in common (like many others in this group)
the rest was just pure suffering
So basically I'm thinking kind of linearly. I type a command and try to find out how it affects my results. But after I type the (tr " " "\n") command (which I would never even have considered, and I have no idea why) I can only see a lot of gaps in the text, which would make me think: that's not good.
And why is it "-f2" and not "-f1" after the quotes? The link comes right after the quote. Is the quote itself "-f1"?
I hope you can follow my logic. After that maybe I could work out the following cut commands... But without the "tr" command the whole process goes sideways.
If you could give me any advice that would be great. Thank you.
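On the -f1/-f2 question, a quick experiment makes cut's field numbering visible: field 1 is everything before the first delimiter (here, the text before the opening quote), so the URL between the quotes lands in field 2. A minimal check, using a made-up href token:

```shell
# tr turns every space into a newline, so each HTML attribute lands on
# its own line and can be cut independently.
echo '<a href="https://www.inlanefreight.com/about/">' | tr " " "\n"

# cut counts fields starting from 1, beginning BEFORE the first delimiter:
echo 'href="https://www.inlanefreight.com/about/">' | cut -d '"' -f1   # prints: href=
echo 'href="https://www.inlanefreight.com/about/">' | cut -d '"' -f2   # prints the URL
```

So the quote itself is not a field; it only separates field 1 (href=) from field 2 (the link).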