Since I suggested that I'm willing to hook my computer up to an LLM and a Mastodon account, I've gotten vocal anti-AI sentiment. I'm wondering if the fediverse has made a plugin to find bots larping as people. As of now I haven't made the bot, and I won't disclose when I do make the bot.
Alright, so what if there are 500 undisclosed bots running around with varying quality of interactions, or a constant influx of new LLM bots?
I don't understand why you want to do this so badly. I don't care how a bot responds. Why would I want to have a conversation with one on social media?
Gotta love it when my questions get ignored and I just get met with another question. But since I'm the only one who's certain about what they're doing: it's evolution. I want to see how systems change and how dynamic they can be; maybe something I never thought of emerges.
If any of those end up interacting with me, or I otherwise see them on my timeline, they'll get treated appropriately: reported, blocked, or in extreme cases, served garbage interactions. Serving garbage to 500+ bots is laughably easy. Every day I get over 5 million requests from various AI scrapers, from thousands of unique IP addresses, and I serve them garbage. It doesn't make a blip on my tiny VPS: in just the past 24 hours, I served 5.2M requests from AI scrapers, from ~2,100 unique IP addresses, using 60MB of memory and a mere 2.5 hours of CPU time. I can do that on a potato.
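The reason this is so cheap is that garbage can be generated deterministically per request, so nothing needs to be stored or fetched. A minimal sketch of the idea (the user-agent list, function names, and detection logic here are illustrative assumptions, not the actual setup described above):

```python
import random
import string

# Illustrative scraper user-agent substrings; a real deny list would be
# much longer and maintained over time (this is an assumption, not the
# poster's actual configuration).
SCRAPER_MARKERS = ("GPTBot", "CCBot", "Bytespider", "ClaudeBot")


def is_scraper(user_agent: str) -> bool:
    """Cheap substring check against known scraper user agents."""
    return any(marker in user_agent for marker in SCRAPER_MARKERS)


def garbage_page(seed: int, paragraphs: int = 5) -> str:
    """Generate plausible-looking junk text from a seed.

    Seeding a local RNG per request keeps memory flat: no pages are
    stored, yet the same path always yields the same 'page', so the
    scraper keeps crawling what looks like stable content.
    """
    rng = random.Random(seed)
    words = [
        "".join(rng.choices(string.ascii_lowercase, k=rng.randint(3, 10)))
        for _ in range(200)
    ]
    return "\n\n".join(
        " ".join(rng.choices(words, k=40)) for _ in range(paragraphs)
    )


def handle_request(user_agent: str, path: str) -> str:
    """Route scrapers to garbage, everyone else to real content."""
    if is_scraper(user_agent):
        # Key the garbage on the path so repeat visits look consistent.
        return garbage_page(seed=hash(path) & 0xFFFF)
    return "real content would be served here"
```

Since each response is just a handful of string operations with no disk or database access, millions of such requests per day cost almost nothing, which is why a small VPS can absorb them without a blip.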
But first: they have to interact with me. As I am on a single-user instance, chances are that by the time any bot gets to try and spam me, a bigger server has already had it reported and blocked (and I periodically review blocks from larger instances I trust, so there's a good chance I'd block most bots before they have a chance of interacting with me).
This is not a fight bots can win.
I can understand blocking 500 bots you think are garbage. I guess you can deal with the traffic from bots, but not so much their content.