I’m a millennial who grew up with tech, and I feel like I can generally tell what’s AI in a video and what’s not, though I don’t get it right 100% of the time. My parents, on the other hand, believe practically every video they see on the net.
Comment sections, though, are my bigger concern. I feel the rich are using them to astroturf opinions, and most of the ‘edgy’, out-there ragebait comments are AI bots. Always faceless. Always a country’s flag. Always a super short sentence. No further discussion beyond the initial edgy comment. More than ever, I feel the dead internet theory is becoming true and most people are just interacting with bots.
Comments
I get what you mean. The tricky part is that online spaces feel less human than they used to, but I think teaching media literacy (how to question sources, spot bots, and think critically) might be the best defense. Even just asking ‘who benefits from me believing this?’ goes a long way.
Watch this get downvoted by all the bots….
I’ve just started to treat every comment as if it were made by a human being who thought deeply about what they wanted to say and truly believes it.
It avoids the usual arguments about talking down to people (which can include accusing people of being bots) and just lets us get down to the nuts and bolts of the conversation.
What is this exactly? I keep hearing about it.
Is it basically saying that when you’re arguing with someone online, they could be a bot?
Interesting viewpoint on generations. In my experience (no idea which “gen” system is in use here, they vary so much), the older people are, the better they are at seeing through and rejecting the bot-driven bullshit on the net these days.
People in their 60s and older grew up with the Internet when it was, for want of a better word, genuine.
No pop-ups, ads you could turn off, no personal social media, real websites with actual useful content, etc.
The younger generation is mostly ok too, they see social media for what it is.
But the 25-45 age bracket? God help us!
Stupid question.
When people say “bots”, do they mean literal programs, or is it a derogatory term for a human spamming a comment section or posts?
Yep, absolutely.
There need to be advisory examples plastered everywhere showing the latest fakes.
One of the biggest problems is bots that use LLMs to drum up shit.
I’ve noticed a huge increase in month-old Reddit accounts with formulaic names, incredibly strong opinions, and daft levels of engagement.
Honestly, services need to invest in ways to completely remove non-human engagement from their products.
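For illustration only, here’s a minimal sketch of the kind of heuristic this comment describes: flag accounts that combine a very young age, a generated-looking word+number username, and an implausibly high comment rate. The field names and thresholds are my own assumptions, not anything from Reddit’s actual tooling.

```python
import re
from datetime import datetime, timezone

# Hypothetical account record; real data would have to come from Reddit's API.
account = {
    "username": "Brave_Otter_4821",              # generated-looking word+number name
    "created_utc": "2025-09-01T00:00:00+00:00",  # account creation time (ISO 8601)
    "comments_last_30d": 900,                    # "daft levels of engagement"
}

# Matches names like Brave_Otter_4821 or Quiet_Fox99.
FORMULAIC_NAME = re.compile(r"^[A-Z][a-z]+_[A-Z][a-z]+_?\d{2,4}$")

def looks_suspicious(acct, now=None):
    """Return True if at least two of three bot-like signals are present.
    Thresholds are illustrative guesses, not tuned values."""
    now = now or datetime.now(timezone.utc)
    age_days = (now - datetime.fromisoformat(acct["created_utc"])).days
    signals = [
        age_days < 45,                                 # roughly "month-old" account
        bool(FORMULAIC_NAME.match(acct["username"])),  # formulaic username
        acct["comments_last_30d"] > 500,               # implausibly high comment rate
    ]
    return sum(signals) >= 2

print(looks_suspicious(account))  # True for the example above
```

None of this proves anything on its own, of course; it’s just the pattern the comment is pointing at, expressed as code.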
I honestly have no fucking clue anymore; the only solution I see as viable is trying to stay off the internet as much as possible. The world seems many orders of magnitude brighter when you aren’t constantly being bombarded by the crap on all the social media platforms these days.
Sorry, my friend, but there is no way you can reliably detect AI images or text unless it is really cheap slop, like what you see on Facebook (which is aimed at old people anyway).
As a network engineer, my advice is:
Treat every picture or video you see through a screen as fake until proven otherwise.
Treat every interaction on the Internet as fake unless it is with somebody you know in real life, and always make sure you are actually talking to that person before saying anything.
The Internet is a dangerous trap preying on modern people, who are starved for human interaction and social contact.
I find this a lot with gender-war-type conversations. There are obviously areas which are very measurable and backed by evidence, but you also get areas that are just people online saying things, and people take that as hard evidence of reality.
I am a younger millennial, so I got plenty of time to grow up when social media was just real-life friends, and I had a lot of real-life experiences. I do feel that, in general, my experiences of real life follow similar patterns to the societal data that’s been collected. You could say that’s confirmation bias, but as a human who’s existed in the world and lived in many different locations and countries, I do feel fairly grounded in reality. But I regularly come across people where I get a strong impression they have been fairly sheltered from human interaction, and their learning of how people engage has come from the internet. I find that quite worrying, as I regularly see posts that are clearly fictional and designed to create friction, and I see these posts become really popular.
I feel like so much out there just isn’t truthful, but if this is where people are getting their core social experiences from, then what are we teaching people about life? No wonder people are getting depressed and we have a subset of young men who hate society and think women are evil. If they aren’t actually speaking to women, what else do they have to go by?
At this point we need to just scrap it. Start a brand new internet from scratch
I wouldn’t entirely rule out people being capable of failing a Turing test…
I think just generally teaching people to actually practise critical thinking is what’s needed.
It has wider uses than just whatever happens to be going on right now; our media sources wouldn’t be able to lie with impunity if readers were regularly questioning the validity of what was being reported.
It’s a skill that’s been lacking since before I was in school.
I’m more concerned about the next generation. We grew up being told Wikipedia is full of shit and never to trust anything on the Internet. They are growing up asking ChatGPT every question that pops into their heads, and most of the adults around them are modelling the same behaviour, with no questioning of the source or accuracy.
In the nicest way I can phrase it: the old clueless will die, but the new clueless must teach the next generations.
We need to teach people about scams and propaganda. The focus needs to be less on a true/fake binary and more on how “authenticity” is used as a mask for advertising and propaganda.
The flag + no-face thing is nonsense; sometimes it’s a random face from the internet with a randomly generated word+number name, for example.
Keep up your good workings 🇬🇺
I don’t think it’s quite there yet but I do believe it could get worse.
I genuinely despise AI, and I don’t think anything will come of it, as it’s just data being warped with other data that belongs to us. You know OpenAI are also using people’s comments on Reddit as source material, among other things such as books and even retracted science papers? So basically it’s not reliable. I wouldn’t even call it AI; that would imply it is conscious and understands what it’s saying, when that is not the case.
You are probably right.
At a basic level, the technology and programming needed to create spam accounts and spam comments for whatever cause, commercial or political, are highly available, cheap and easy to wield. Platforms are also not great at detecting and eliminating such practices, and are probably engaging in it themselves or allowing other commercial or political actors to do so.
Taking this into account, it is very plausible that a huge array of different concerns across the globe are constantly using spam bots for a multitude of different purposes and this is generally bogging down and destroying the internet user experience, recently super-charged by the introduction of AI.
Now, is it a conspiracy or just a hunch that, say, Meta is allowing Israel and the US to control speech and content on their platforms to promote specific ideological goals? That is already proven.
Basically, what you’re seeing is mega-garbage content creation: the online equivalent of combining every single piece of food in your house into one dish. A murky mess. However, some actors are obviously stronger than others.