I wonder if one could build maze webpages to trap these AI crawlers. If the visitor is a human, nothing happens, but once it's identified as a crawler, the site dynamically generates page after page of garbage. The server doesn't need to store any of that garbage, but the crawler does.
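A minimal sketch of that idea, assuming a Flask app and a hypothetical list of crawler user-agent substrings (illustrative only, not a hardened implementation):

```python
import hashlib
from flask import Flask, request, abort

app = Flask(__name__)

# Hypothetical list of user-agent substrings to treat as AI crawlers.
AI_CRAWLER_UA = ("GPTBot", "CCBot", "ClaudeBot")

def is_ai_crawler() -> bool:
    ua = request.headers.get("User-Agent", "")
    return any(token in ua for token in AI_CRAWLER_UA)

@app.route("/maze/<token>")
def maze(token: str):
    if not is_ai_crawler():
        abort(404)  # humans never see the maze
    # Derive "content" and further maze links deterministically from the
    # token, so nothing has to be stored server-side.
    seed = hashlib.sha256(token.encode()).hexdigest()
    links = "".join(
        f'<a href="/maze/{seed[i:i+8]}">{seed[i:i+8]}</a> ' for i in range(0, 40, 8)
    )
    return f"<html><body><p>{seed * 4}</p>{links}</body></html>"

if __name__ == "__main__":
    app.run()
```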
For what it's worth: they do honor the robots.txt file. I had the same problem with a client's CMS and denying all AI crawler user agents did the trick.
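For reference, this is the usual robots.txt pattern; the agent names below are the documented crawlers for OpenAI, Common Crawl, Anthropic, and Google's AI training opt-out, but check the current docs before relying on them:

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```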
It's clear they've all gone mad. The traffic spiked 400% overnight and made the CMS unresponsive a few times a day.
They seem determined to set the web on fire:
How are the links structured in the href attribute? Are they relative or absolute? If relative, that's probably why.
Relative.
For example, the page:
https://website/blog/1-post
contains:
href="2-post"
Browsers and other bots like Googlebot correctly interpret this as a link to:
https://website/blog/2-post
while the OpenAI crawler goes to:
https://website/blog/1-post/2-post
I wonder if there is some way to report this bug to them?
Google recommends absolute URLs:
https://web.archive.org/web/20221208150134/https://www.webma...
I actually think OpenAI is right, unless you have a <base> tag? That's a relative URL, and it's resolved against the current URL you are on, not the root domain.
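For what it's worth, both readings can be checked against standard RFC 3986 resolution, e.g. with Python's urllib.parse.urljoin; the URLs below are the ones from this thread, and which answer is "right" hinges on whether the page URL ends in a slash:

```python
from urllib.parse import urljoin

# Resolution against a page URL without a trailing slash:
# the last segment ("1-post") is treated as a document, not a directory.
print(urljoin("https://website/blog/1-post", "2-post"))
# -> https://website/blog/2-post  (what browsers and Googlebot do here)

# With a trailing slash, "1-post/" is a directory, so the relative link
# lands underneath it, which is the URL the OpenAI crawler requested.
print(urljoin("https://website/blog/1-post/", "2-post"))
# -> https://website/blog/1-post/2-post
```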
Cloudflare should provide a service (paid or free) to block AI crawlers.
They actually already have that!
You can find it at Security > Bots > Block AI Bots
They believe they can take market share from Google, which currently has a market cap of over $2T, so with that amount of money on the line they don't care how hard they hammer the internet, or how much hate and how many lawsuits they attract.
The issue is that they don't understand that the search business took decades to develop into what it currently is, and it's only so profitable for Google because they hold a monopoly, because the US is an oligarchy.
The stuff OpenAI is building has been proven to be easy (and expensive) to replicate, with many competitors posting similar results even while starting later.
Whatever new iteration of the search business they develop will likely mean smaller profits, but nobody cares as long as billions are being invested in this space.
Not to mention their AGI goals, when you can't even reliably trust their software to answer basic questions.
So we are currently in the internet-of-trash age: trash content being generated, trash bots hammering your tiny website, and trash ambitions.
I doubt this capex will go on for more than two years; once the bubble bursts, companies will start reviewing what they built, and they will fix the crawler bug you just mentioned.