Appoxo@lemmy.dbzer0.com to Technology@lemmy.world · English · 4 months ago
Cloudflare plans marketplace to sell permission to scrape websites (techcrunch.com)
Cross-posted to: technology@lemmy.zip, technology@lemmy.ml
Rikudou_Sage@lemmings.world · 4 months ago
Put a page on your website saying that scraping your website costs [insert amount], and block the bots otherwise.
gravitas_deficiency@sh.itjust.works · 4 months ago
The hard part is reliably detecting the bots.
melroy@kbin.melroy.org · 4 months ago
Also, you don't want to block legit search engines that are not scraping your data for AI.
gravitas_deficiency@sh.itjust.works · 4 months ago
Again: it's hard to differentiate all those different bots, because you have to trust that they are what they say they are, and they often are not.
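One common answer to the "they often are not" problem, not mentioned in the thread itself, is the reverse-DNS plus forward-DNS check that Google and Bing publicly document for verifying their crawlers. A minimal Python sketch, assuming you already have the client IP and accepting only the documented hostname suffixes:

```python
# Sketch: verify a request that claims to be Googlebot/Bingbot really comes
# from those crawlers. Reverse-resolve the IP, check the hostname suffix,
# then forward-resolve the hostname and require it to map back to the same IP
# (a bare PTR record alone could be spoofed).
import socket

TRUSTED_SUFFIXES = (".googlebot.com", ".google.com", ".search.msn.com")

def is_verified_crawler(ip: str) -> bool:
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)          # IP -> hostname
        if not hostname.endswith(TRUSTED_SUFFIXES):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # hostname -> IPs
        return ip in forward_ips
    except OSError:
        # lookup failed; treat as unverified
        return False

if __name__ == "__main__":
    print(is_verified_crawler("66.249.66.1"))  # example IP only, not a guarantee
```

Bots that merely spoof a Googlebot user agent fail this check, which is the gap the comment above is pointing at.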
melroy@kbin.melroy.org · 4 months ago
Instead of blocking bots by user agent, I'm blocking full IP ranges: https://gitlab.melroy.org/-/snippets/619
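The linked snippet isn't reproduced here, but the same idea, dropping requests whose source address falls inside known scraper or cloud CIDR ranges, can be sketched with Python's ipaddress module. The ranges below are placeholders (documentation test networks), not the actual list from the snippet:

```python
# Sketch: block by CIDR range rather than user agent.
import ipaddress

BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # placeholder range (TEST-NET-3)
    ipaddress.ip_network("198.51.100.0/24"),  # placeholder range (TEST-NET-2)
]

def is_blocked(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(is_blocked("203.0.113.42"))  # True
print(is_blocked("192.0.2.1"))     # False
```

In practice the same range list is usually enforced at the reverse proxy or firewall rather than in application code, but the membership check is identical.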
vinnymac@lemmy.world · 4 months ago (edited)
It certainly can be a cat-and-mouse game, but scraping at scale tends to stay ahead of the security teams. Some examples:
https://brightdata.com/
https://oxylabs.io/
Preventing access by requiring an account with strict access rules can curb the vast majority of scraping; then your only bad actors are the rich venture capitalists.
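As a rough sketch of what "requiring an account, with strict access rules" might look like in practice (the quota, window, account IDs, and in-memory store below are made up for illustration; a real deployment would use persistent storage and per-endpoint limits):

```python
# Illustrative per-account quota: each authenticated account gets a fixed
# number of requests per rolling 24-hour window.
import time
from collections import defaultdict

DAILY_LIMIT = 500          # max requests per account per day (illustrative)
WINDOW_SECONDS = 86_400

_usage = defaultdict(list)  # account_id -> timestamps of recent requests

def allow_request(account_id: str) -> bool:
    now = time.time()
    recent = [t for t in _usage[account_id] if now - t < WINDOW_SECONDS]
    if len(recent) >= DAILY_LIMIT:
        _usage[account_id] = recent
        return False
    recent.append(now)
    _usage[account_id] = recent
    return True

print(allow_request("user-123"))  # True until the quota is exhausted
```

The point of tying limits to accounts is that exceeding them burns an identity the scraper had to create, which raises the cost of scraping at scale even when IPs rotate.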