- cross-posted to:
- technology@beehaw.org
A pseudonymous coder has created and released an open source “tar pit” to indefinitely trap AI training web crawlers in an infinitely, randomly-generating series of pages to waste their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants which trap and consume their prey, can be deployed by webpage owners to protect their own content from being scraped or can be deployed “offensively” as a honeypot trap to waste AI companies’ resources.
“It’s less like flypaper and more an infinite maze holding a minotaur, except the crawler is the minotaur that cannot get out. The typical web crawler doesn’t appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too. Nepenthes generates random links that always point back to itself - the crawler downloads those new links. Nepenthes happily just returns more and more lists of links pointing back to itself,” Aaron B, the creator of Nepenthes, told 404 Media.
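For a sense of how little machinery is involved, here's a minimal sketch of the general idea in Python (not Aaron B's actual code): a server that answers every request with a page of randomly generated links that all resolve back to itself.

```python
import random
import string
from http.server import BaseHTTPRequestHandler, HTTPServer

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every path gets the same treatment: a page full of random links
        # that point right back into this server.
        links = "".join(
            '<a href="/{}">more</a>\n'.format(
                "".join(random.choices(string.ascii_lowercase, k=12))
            )
            for _ in range(20)
        )
        body = "<html><body>{}</body></html>".format(links).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A crawler that naively follows every link never runs out of new URLs.
    HTTPServer(("", 8080), TarpitHandler).serve_forever()
```

The real Nepenthes presumably does more than this (throttling, plausible-looking filler text), but the core loop is just that: links begetting links.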
What a great name!
This showed up on HN recently. Several people who wrote web crawlers pointed out that this won’t even come close to working except on terribly written crawlers. Most just limit the number of pages crawled per domain based on popularity of the domain. So they’ll index all of Wikipedia but they definitely won’t crawl all 1 million pages of your unranked website expecting to find quality content.
Did you read the article? (There is a link to a non walled version.)
Since they made and deployed a proof-of-concept, Aaron B said their pages have been hit millions of times by internet-scraping bots. On a Hacker News thread, someone claiming to be an AI company CEO said a tarpit like this is easy to avoid; Aaron B told 404 Media “If that’s true, I’ve several million lines of access log that says even Google Almighty didn’t graduate” to avoiding the trap.
If it is linked to the Internet then it’ll be hit by crawlers. Their “trap” isn’t about how many show up but how long each bot stays on their individual site.
I think this rate limiting mechanism is mostly a niceness rule: you should try not to put too much pressure on any website and obey the rules defined in its robots.txt.
So I guess this idea is not bad as it would mostly penalize bad players.
Then that’s where we hide the good stuff
Reminds me of burying folders in folders in folders to hide naughty content as a youth.
When I worked as a technician in a computer repair company, it was amazing how many people just put that stuff on the desktop.
Totally brilliant and foolproof. Humans can’t open folders
Like what?
Pron
Like stuff that is not bad.
Rule out the mediocre too, unless it’s extremely mediocre then it’s OK
Can confirm, I have a website (https://2009scape.org/) with tonnes of legacy forum posts (100k+). No crawlers ever go there.
It’s a shame that 404media didn’t do any due diligence when writing this
I think you may have just misunderstood the post.
It’s not intended to trap the web crawlers indexing content for google search.
It’s intended to trap AI training bots harvesting sentences in order to improve their LLMs.
I don’t really have an answer as to why those bots don’t find your content appealing, but that doesn’t mean that Nepenthes doesn’t work.
No crawlers ever go there.
if it makes you feel any better, i would go there if i was a web crawler.
Why would they? Outrage and meme content sell clicks, in-depth journalism doesn’t.
2009scape!? If it’s what I think it is that is amazing. Legend
It is what you think it is, come join ^^. It’s a small niche world
More accurately, it traps any web crawler, including regular search engines and benign projects like the Internet Archive. This should not be used without an allowlist for known trusted crawlers at least.
More accurately, it traps any web crawler
More accurately, it does not trap any competent crawlers, which have per domain limits on how many pages they crawl.
You would still want to tell the crawlers that obey robots.txt not to pay attention to that part of the website. Otherwise it’s just going to break your SEO
Just put the trap in a space roped off by robots.txt - any crawler that ventures there deserves being roasted.
Yup, put all the bad stuff into “not-robots.txt”. Works every time.
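As a sketch, assuming the maze is mounted under a hypothetical /maze/ path, the robots.txt carve-out is two lines:

```
User-agent: *
Disallow: /maze/
```

Crawlers that honor robots.txt never see the trap, so your SEO is untouched; anything that wanders in has, by definition, ignored the rules.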
How exactly would that work? Would trusted crawlers be blocked from accessing the maze?
You can tell which crawler it is by the User-Agent header
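A rough sketch of that kind of check (the user-agent substrings are illustrative, not an authoritative list, and as noted below they can be spoofed):

```python
# Illustrative allowlist; real crawler User-Agent strings vary and can be faked.
TRUSTED_SUBSTRINGS = ("Googlebot", "bingbot", "archive.org_bot")

def is_trusted_crawler(user_agent: str) -> bool:
    """Crude allowlist check on the User-Agent header."""
    return any(s in user_agent for s in TRUSTED_SUBSTRINGS)

def route(path: str, user_agent: str) -> str:
    # Trusted crawlers never get routed into the maze; everything else
    # that strays under /maze/ gets the tarpit.
    if path.startswith("/maze/") and not is_trusted_crawler(user_agent):
        return "tarpit"
    return "real-content"
```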
Which can easily be faked.
But then they’re probably not going to obey robots.txt anyway so it doesn’t matter
All of cyber security is an arms race of moving targets. It doesn’t need to be foolproof to mitigate traffic for a while.
Yeah and then you allowlist them by blacklisting them from the maze.
This reminds me of that one time a guy figured out how to make “gzip bombs” that bricked automated vuln scanners.
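The rough mechanics, as a sketch (not the original tool): pre-compress a huge run of zero bytes and serve the small result with Content-Encoding: gzip, so any client that transparently decompresses responses pays the full expanded size.

```python
import gzip
import io

def build_gzip_bomb(decompressed_gib: int = 1) -> bytes:
    """Compress a long run of zeros; ~1 GiB of zeros gzips down to roughly 1 MiB."""
    buf = io.BytesIO()
    chunk = b"\0" * (1024 * 1024)
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        for _ in range(decompressed_gib * 1024):
            gz.write(chunk)
    return buf.getvalue()

# Serve the payload (with "Content-Encoding: gzip") only on paths that
# nothing legitimate requests, e.g. the URLs vuln scanners love to probe.
```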
I had a scanner that was relentlessly smashing a server at work and configured one of those.
evidently it was one of our customers. it filled their storage up and increased their storage costs by like 500%.
they complained that we purposefully sabotaged their scans. when I told them I’d spent two weeks tracking it down and confirming their scans were causing performance issues on our infrastructure, I said I had every right to protect the experience of all our users.
I also reminded them they were effectively DDoSing our services, which I could file a request to investigate with the FBI’s cyber crimes division.
they shut up, paid their bill, and didn’t renew their measly $2k mrr account with us when their contract ended.
bitch ass small companies are always the biggest pita.
DDoS? Where was the distribution part?
effectively
For all practical purposes; in effect.
I believe the commenter was implying that DoS would be a more accurate description, since it does not seem as if the “attack” was distributed, but it is a nitpick nonetheless. We don’t have the context to understand if multiple servers were involved that distributed the load
I see DDoS and DoS used interchangeably. I think because DDoS became a somewhat mainstream term (at least in online gamer communities) and is pronounced verbally (dee doss). Idk, just what I’ve seen.
Like people calling roguelites roguelikes or third person shooters FPSes
Dee doss? I always say dee dee oh es.
My new favorite is asking if it’s cheating to look at your opponent’s pieces in chess.
For anybody who ever had this happen, ChatGPT has some solutions to remedy the situation:
I tried the same input and got a more expected answer.
There is actually a chaos variant of chess that follows this principle:
https://en.m.wikipedia.org/wiki/Kriegspiel_(chess)
I read about it in a PKD short story.
When I ask the same in Perplexity, I get this:
Perplexity is really good, I love using it.
I’ve always been taught if you say “I adjust” before touching a piece then it’s ok to touch it (specifically so you can move an off-center piece into the center of its square)
Not gonna fly if you say “I adjust” and then pick up a piece, move it to a new spot, then bring it back down and set it in the original spot.
Also ffs, don’t adjust pieces unless it’s your turn.
Here it’s “j’adoube” with a heavy German accent
…and anywhere else in the world too. :)
Wow lol!
This is old.
ChatGPT no longer answers like this, if it ever did.
Screenshot from 2 seconds ago. ChatGPT 4o mini
Just tried and got the expected answer:
In chess, looking at your opponent’s pieces is not only allowed but essential to playing the game. Observing the placement and movement of your opponent’s pieces helps you plan your strategy and anticipate their moves. However, if you’re referring to situations like secretly peeking at a hidden plan (in correspondence chess, for example) or breaking rules in a specific chess variant, then it could be considered cheating. But in standard chess, observing your opponent’s pieces is part of fair play.
Damn, I guess it ever did.
I wish my knee-jerk dismissal of anything remotely anti-AI didn’t get in my way so often.
Yeah, that’s not real. If you ask ChatGPT it gives the obvious answer that you have to look at the pieces. Either it’s shopped or it’s leaving off previous messages where the user has deliberately convinced it that it is cheating.
Have you asked it how many letter 'R’s are in Strawberry?
See, I understand that one: LLMs don’t work with letters, they work with tokens, which are more like Chinese characters. So even though they display letters to the end user, the models themselves don’t see them, which is why you can get dumb mistakes like that.
The chess thing is strange though, it’s not like there is a lack of writing on chess, so I do wonder where it got that idea from.
I just asked too, and it says it’s cheating. Note this is with 4o Mini, like the other screenshots.
Here’s the link ChatGPT gives me for sharing as more proof: https://chatgpt.com/share/67928ef6-42ac-8010-8966-099794d1de8c
Huh, interesting, it does do it with 4o mini but not with 4o. I stand corrected.
Yeah I got the sensible answer with 4o (DuckDuckGo)
We don’t know what they did above this prompt, maybe it was advised to answer like this in a prior prompt 🤗 We can’t really know from the picture
But 4o is not too old, I think, it is still the highest free unlimited tier.
I wrote that prompt and just asked it as is, without any other prompts before it. You can see other people got the same answer, or try it yourself.
I haven’t seen that episode in probably 15 years and I still remember exactly what this was.
First thing that popped into my head after I read the headline!
Can you explain for the rest of the class?
Thank you!
I’m surprised no one has created a trek wiki separate from the shitty fandom site yet. Sometimes when I search for Doom info I accidentally click the fandom link and have to go back out to get the .org site.
The Minecraft wiki has been way better since they ditched Fandom.
I was aware of the two Doom wikis, but not the reason there was a split, and I’ve heard other complaints about fandom sites before. What’s the deal with that? I’m out of the loop.
fandom.com has awful intrusive ads and a shitty slow website (probably largely because of the ads)
This sort of thing has been a strategy for dealing with unwanted web crawlers since web crawlers were a thing. It’s an arms race, though; crawlers do things to detect these “mazes” and so the maze-makers keep needing to up their game as well.
As we enter an age where AI is effectively passing the Turing Test, it’s going to be tricky making traps for them that don’t also ensnare the actual humans you’re trying to serve pages to.
But does running this cost the AI bot at least as much as it costs you to run?
Picking words at random from a dictionary would not be very compute-intensive; the content doesn’t need to be sensical.
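Something along these lines would do it, assuming a system wordlist like /usr/share/dict/words is available:

```python
import random

# Load a wordlist once; this path is the usual location on Linux/macOS.
with open("/usr/share/dict/words") as f:
    WORDS = f.read().split()

def junk_paragraph(n_words: int = 200) -> str:
    # Uniformly random words: nearly free to produce, worthless to train on.
    return " ".join(random.choices(WORDS, k=n_words))
```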
Yes, the scraper is going to mindlessly gobble up information. At best they’d expend more resources later to try and determine the value of the content but how do you do that really? Mostly I think they’re hoping the good will outweigh the bad.
It honestly depends. There are random drive by scrapers that will just do what they can, usually within a specific budget for a domain and move on. If you have something specific though that someone wants you end up in an arms race pretty quickly as they will pay attention and tune their crawler daily.
I was thinking exactly that, generating something like lorem ipsum to cost both time, compute and storage for the crawler.
It will be more complex and require more resources tho.
I’d like to introduce you to Pandora’s Pot
I would think yes. The compute needed to make a hyperlink maze is low, compared to the AI processing of the random content, which costs nearly nothing to make, but still costs the same to process as genuine content.
Am I missing something?
I’m wondering about the cost to the server’s resources / bandwidth to serve up unlimited random junk also.
But kudos to the developer for making this anyway
This is my concern exactly.
This seems like a neat prank, but a potentially expensive one. Heck, if it works right you could end up with several bots stuck in your maze, perhaps dozens or hundreds. At that point bandwidth becomes my concern.
It does if you use AI to generate the pages it’s scraping.
This won’t work against commercial crawlers. They check page contents with something similar to a simhash and don’t recrawl these pages. They also have limiters like for depth to avoid getting stuck in circular links.
You could generate random content for each new page, but you’ll still eventually hit the depth limit. There are probably other rules related to content quality to limit crawling too.
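A crude sketch of those crawler-side guards (simplified; real crawlers use simhash-style near-duplicate fingerprints rather than exact hashes):

```python
import hashlib

MAX_DEPTH = 10                  # don't follow link chains forever
MAX_PAGES_PER_DOMAIN = 10_000   # per-domain crawl budget
seen_fingerprints: set[str] = set()
pages_per_domain: dict[str, int] = {}

def should_crawl(domain: str, depth: int, content: bytes) -> bool:
    """Skip pages that are too deep, over budget, or duplicates of ones already seen."""
    if depth > MAX_DEPTH:
        return False
    if pages_per_domain.get(domain, 0) >= MAX_PAGES_PER_DOMAIN:
        return False
    fingerprint = hashlib.sha256(content).hexdigest()  # stand-in for a simhash
    if fingerprint in seen_fingerprints:
        return False
    seen_fingerprints.add(fingerprint)
    pages_per_domain[domain] = pages_per_domain.get(domain, 0) + 1
    return True
```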
True, this is an arms race situation after all. The real benefit of this is creating garbage training data that makes garbage models. So it’s not just increasing the cost of crawling, it increases the cost of stealing everybody’s shit because you need extra data quality checks. Poisoning the well.
You could theoretically use the shittiest local llm you can find to dynamically create slop for the piggies
Say it with me now: model collapse! I think this approach is especially insidious in that rather than dumping obvious nonsense into the training corpus that can then be scrubbed, it pushes the downstream LLM invisibly towards spontaneously imploding.
Just use a Markov chain.
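A minimal sketch of that: build a word-level chain from whatever text is handy and let it babble. The output looks like language at a glance but carries no meaning, which is exactly the point.

```python
import random
from collections import defaultdict

def build_chain(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain: dict, length: int = 100) -> str:
    word = random.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        # Dead end (word only ever appeared last): jump to a random word.
        word = random.choice(followers) if followers else random.choice(list(chain))
        out.append(word)
    return " ".join(out)
```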
Pivoting back to blockchain bb
Exactly! That’s ideal because an LLM or simple pattern matching can’t easily winnow it out the way plainly random strings could be. If it’s sensible language but with the usual LLM hallucinations, then you need humans to curate your data. Fuck you, Sam Altman.
I suspect that there are many websites that already dynamically generate an unbounded number of pages based on the links one clicks, and that Web spiders will have needed to deal with those for as long as there have been people spidering the Web, which is going to be no later than the first Web search engines.
I’d guess that if nothing else, they cap how far they spider a site. Probably a lot more sophisticated, use heuristics to figure out which sites are more worth spending indexing resources on, as it’s not just whether to spider but also the frequency with which to do so. Some parts of a site are more “valuable” than others – for a search engine, a more desirable target for users clicking on results – and some will update more frequently and are more-useful to re-spider at higher frequency. Google will return current news articles, yet still indexes a large portion of the content out there. They won’t be doing that by simply sending GoogleBot at everything that they’ve indexed at a fixed frequency.
This genus-named genius game is sending pain to these previous devious data devourers
The modern equivalent of making a page that loads in two frames, left and right, which each load in two frames, top and bottom, which each load in two frames, left and right …
As I recall, this was five lines of HTML.
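Something along these lines, assuming the file is saved under the (hypothetical) name frames.html so each frame reloads the page inside itself:

```html
<frameset cols="50%,50%">
  <frame src="frames.html">
  <frame src="frames.html">
</frameset>
```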
This is really nostalgic for me. I can see the Netscape throbber in my mind.
I remember making one of those.
It had a faux URL bar at the top of both the left and right frame and used a little JavaScript to turn each side into its own functioning browser window. This was long before browser tabs were a mainstream thing. At the time, relatively small 4:3 or 5:4 ratio monitors were the norm, and I couldn’t bear the skinny page rendering at each side, so I gave it up as a failed experiment.
And yes I did open it inside itself. The loaded pages were even more ridiculously skinny.
When I did my five lines, recursively opening frames inside frames ad infinitum, it would crash browsers of the time within about twenty seconds.
it might be useful to generate text on the random URLs, then test different repetitions to see if you can leave a mark on the training data… So after X repetitions or injected information, release the bot back into the wild with whatever message or false info you want it saddled with.
I suggest they should generate random garbage content that’s different for every page. Ideally you would want to design it in a way that makes the model trained on that source misbehave in some way. Perhaps use another LLM to generate text, but take the tokens that are least likely to be next. You could also probably apply some technique to embed meaning in the text in a non-human-discernible manner that the LLM will learn to decode, and thus teach it things without the developers being any the wiser. Teach the AI to think subversive thoughts in patterns of whitespace, etc. Basically, once the LLM is trained on something it’s hard to untrain, and if it doesn’t get caught until it’s in a production environment, they’re screwed.
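The “least likely token” idea would look roughly like this with an off-the-shelf model (a sketch; gpt2 is just a small stand-in, and a real poisoning pipeline would need far more care):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The library is", return_tensors="pt").input_ids
for _ in range(30):
    with torch.no_grad():
        logits = model(ids).logits[0, -1]   # scores for the next token
    worst = logits.argmin().view(1, 1)      # the token the model finds least likely
    ids = torch.cat([ids, worst], dim=1)
print(tok.decode(ids[0]))                   # maximally "wrong" continuation
```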
- Invent some incredibly specific but entirely false fact (e.g. the kingdom of bolivia was once ruled by King Aron the Benevolent before he was brutally murdered by his cousin-in-law over a dispute about the colonies)
- Embed said fact in invisible font among material you own the copyright to
- Let AI bots suck it up as training data
- Ask random AI bots about King Aron the Benevolent of Bolivia and sue the companies since you now have proof that they violated your copyright
I mean this probably wouldn’t work from a legal standpoint, but whatever. It’s nice to imagine.
Great suggestion. Ever feel like you’re stuck in a maze, or did you just have an LLM stroke?
You could programmatically rearrange the meaning of sentences. I.e., instead of “where is the library, I need to get a book” you could do some sort of full word-replacement cypher and end up with sentences like “Let’s mambo down to the banana patch.”
Just for fun. :-)
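A toy version of that word-replacement cypher (the mapping is obviously made up, and a real one would cover a whole vocabulary):

```python
# Hypothetical substitution table mapping each source word to a replacement.
CYPHER = {
    "where": "lets", "is": "mambo", "the": "down", "library": "to",
    "i": "the", "need": "banana", "to": "patch", "get": "quick",
    "a": "brown", "book": "fox",
}

def encode(sentence: str) -> str:
    # Each word is swapped independently, so the output stays grammatical-ish
    # while the meaning is scrambled.
    return " ".join(CYPHER.get(w, w) for w in sentence.lower().split())

print(encode("where is the library i need to get a book"))
# -> "lets mambo down to the banana patch quick brown fox"
```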