The FSF is ideologically incapable of implementing bot management strategies because they value internet anonymity. Bot evasion strategies are literally just advanced ways of being anonymous. You can’t tell if a person is a bot or not without violating their anonymity or internet freedom.
Anubis is a modest compromise. It checks if you can run JavaScript and blocks those who can’t. It’s not perfect. It’ll block elinks/lynx users or a real person using curl. But it’ll also block any bot that doesn’t use a browser, which accounts for most of the volume.
The “cryptomining” and “malware” comparisons against Anubis are hyperbolic but sort of true. Proof of Work is the dumbest and most wasteful possible strategy to combat bots. It’s not the hashing that stops bots, it’s the check that they can run JavaScript that does.
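For context, the “hashing” in question is client-side proof of work: the server hands out a challenge string and the browser burns CPU finding a nonce whose hash meets a difficulty target. A minimal sketch of the shape of that work (function names and the difficulty encoding here are illustrative, not Anubis’s actual API):

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty: int) -> int:
    """Client side: brute-force a nonce so that sha256(challenge + nonce)
    starts with `difficulty` hex zeroes. This is the wasteful part --
    thousands of hashes on average."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty: int) -> bool:
    """Server side: a single hash to check, regardless of client effort."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve_pow("example-challenge", 3)
print(verify_pow("example-challenge", nonce, 3))  # True
```

The asymmetry (cheap to verify, expensive to solve) is the whole design, but as noted above, the work itself filters nothing: any client that can run the solver has already proven it can run JavaScript.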
Anubis has a new JavaScript-less meta refresh challenge, which uses plain HTML to refresh the page after a few seconds. This is a much better solution than the computational proof of work, in my opinion. This line from the docs, though, is perplexing:
This is not enabled by default while this method is tested and its false positive rate is ascertained. Many modern scrapers use headless Google Chrome, so this will have a much higher false positive rate.
The false positive rate will be the same as proof of work, minus however many bots run a headless browser with JavaScript disabled. Proof of Work doesn’t give you positives or negatives; it’s a flat tax.
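To make the meta refresh idea concrete: the interstitial is plain HTML that any real browser (including lynx) will follow after a delay, typically carrying a token back so the server knows the client waited. A hypothetical sketch of generating and checking such a page (the HMAC token scheme here is illustrative, not Anubis’s actual implementation):

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # illustrative only; a real deployment would rotate this

def challenge_page(path: str, delay: int = 5) -> str:
    """Build an interstitial that sends the client back to `path` after
    `delay` seconds via <meta http-equiv="refresh">. No JavaScript is
    required, so text browsers like lynx pass the check too."""
    token = hmac.new(SECRET, path.encode(), hashlib.sha256).hexdigest()
    return (
        "<!DOCTYPE html><html><head>"
        f'<meta http-equiv="refresh" content="{delay}; url={path}?token={token}">'
        "</head><body>Checking your browser&hellip;</body></html>"
    )

def verify_token(path: str, token: str) -> bool:
    """Server side: confirm the token came from a page we issued."""
    expected = hmac.new(SECRET, path.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

page = challenge_page("/blog/post")
print('http-equiv="refresh"' in page)  # True
```

The filter here is patience, not computation: a scraper that follows redirects instantly and statelessly never waits out the delay, while any client that honors the refresh pragma gets through.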
It’ll block elinks/lynx users or a real person using curl. But it’ll also block any bot that doesn’t use a browser, which accounts for most of the volume.
nuh uh
Anubis has a new javascriptless metarefresh which uses HTML to refresh the page after a few seconds. This is a much better solution than the computational proof of work, in my opinion. This line from the docs though is perplexing:
yuh uh!
(I’m just fucking around now I’m too tired lol, it’s 10 pm. But it does work on lynx!!!)
Just want to reiterate, I understand the concerns with Anubis, it’s just that FSF makes me go
The Anubis homepage uses the new meta refresh strategy, which should work on lynx since it doesn’t use JavaScript.
I doubt you even saw an Anubis challenge. Anubis is normally configured by User-Agent. IDK what lynx’s User-Agent is, but I bet Anubis wasn’t configured to challenge it.