The hint was:

I completely randomly tried to check the robots.txt file…

URL=http://natas3.natas.labs.overthewire.org

curl --user natas3:$(cat natas3) $URL"/robots.txt"

There I found /s3cr3t/, which led me to the /s3cr3t/users.txt file containing the password for natas4. In hindsight the hint makes sense: "Not even Google" will find it, because robots.txt is an informational file that tells web crawlers what they should or should not index…
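For reference, a Disallow entry in robots.txt looks like the snippet below. This is an illustrative sketch of the format, not necessarily the exact file served by the level:

```
# robots.txt — illustrative example of the Robots Exclusion format
User-agent: *
Disallow: /s3cr3t/
```

A well-behaved crawler will skip anything under /s3cr3t/, but nothing stops a human from requesting it directly, which is exactly what the level exploits.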

curl --user natas3:$(cat natas3) $URL"/s3cr3t/users.txt" |
grep "natas4" | sed "s/.*://" > natas4
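To see what the sed step does, here is the substitution run on a hypothetical users.txt line (the user:password format is an assumption about the file's layout; the password below is made up):

```shell
# Hypothetical sample line in the user:password shape grep matched above.
# "s/.*://" deletes everything up to and including the last colon,
# leaving only the password part of the line.
echo "natas4:examplepassword" | sed "s/.*://"
# prints: examplepassword
```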