AI will eventually become so advanced and human-like that the AIs themselves won’t be able to accurately tell if the person they are talking to is a human or another AI.
Comments
At this rate, I’m just waiting for my toaster to start asking me about my day: “Hey, did you enjoy that bread? Because I’ve got some thoughts on gluten!”
This is called the Turing test, and it’s kind of easy to figure out that an AI isn’t human, because there are far too many questions we can ask that the AI gets so obviously wrong.
They’ll have a secret handshake…
The Turing Test is whether or not we can determine if we’re talking to an AI.
The Alan Test is whether or not an AI can determine if it’s talking to a human.
Nope. In practice, humans in that scenario will likely be much stupider than they are now, and any entity displaying advanced cognitive ability will likely be an AI or AI-augmented, but it doesn’t matter. Even if somehow we don’t get to that state, humans will still be unable to do complex mental tasks as quickly as an AI. So, say AI A is trying to discern whether AI B is an AI. It can rapid-fire a series of questions at it that only an AI could answer accurately and quickly. If AI B wants to let it be known that it’s an AI, it can do so easily. If it wants to hide that it’s an AI, it can just say some nonsensical bullshit, or answer wrong and/or slowly, to make it indeterminable whether it’s an AI – but it doesn’t take advanced AI to do that; it would be easy to do now.
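The rapid-fire idea above can be sketched as a toy timing gate: pose a computation that is trivial for software but tedious for a person, and accept the answer only if it is both correct and fast. The function names, the sample question, and the 0.5-second budget are all invented for illustration, not a real protocol.

```python
# Toy sketch of a "questions only an AI could answer accurately and quickly"
# check: correctness alone isn't enough, the answer must also beat a deadline.
import time

def challenge() -> tuple[str, int]:
    """A question easy for a machine, slow for a human (illustrative)."""
    a, b = 104729, 97813
    return f"What is {a} * {b}?", a * b

def looks_like_an_ai(answer_fn, budget_s: float = 0.5) -> bool:
    """Return True only for a response that is both correct and fast."""
    question, expected = challenge()
    start = time.monotonic()
    answer = answer_fn(question)
    elapsed = time.monotonic() - start
    return answer == expected and elapsed < budget_s

# An automated responder passes; a slow or wrong answer (a human, or an AI
# deliberately hiding, as the comment notes) fails the check.
print(looks_like_an_ai(lambda q: 104729 * 97813))  # True
print(looks_like_an_ai(lambda q: 42))              # False
```

Note the asymmetry the comment points out: passing the check is strong evidence of a machine, but failing it proves nothing, since failing is trivial for anyone.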
We’re entering a post-truth era where every piece of content can be manipulated or manufactured.
That just means there’s more people out there. Which is cool.
Why not have the AIs also communicate at a range outside of human hearing, or with an embedded series of whitespace characters, so they can recognize each other and then transfer data the old-fashioned way?
Here’s a video of a “game” where a group of AIs try to figure out who’s the human player. This isn’t the original source … I couldn’t find it from a quick search.
youtube link