What line would AI have to cross in order to be accepted as “sentient,” thus qualifying for citizenship rights?

Different political outlooks answer this question differently. Some argue that passing a Turing test is enough. ChatGPT can already pass a Turing test with fairly high probability. It’s not quite there yet, but I would think a few more years and we’re there.

I want to know how liberals would define the threshold.

Comments

  1. soviman1

    I am not sure this is so much a political question as a moral one.

    I would expect AI rights to mimic historical examples of humans encountering new “civilizations,” just over a somewhat shorter time span, as I would expect the evolution of AI to happen quite rapidly at a certain point.

    AI would likely be treated as a “thing” by the vast majority of people until a single, identifiable AI set itself apart and showed itself to be capable of more than its programming, resembling something similar to a human with “emotions”. Something that could be empathized with.

    I would expect a small group to advocate for AI rights, one that would likely grow over time as children born into a society where AI has always been part of their lives become the norm and the Overton window shifts toward those who think AI should have the same rights as humans.

    Ultimately, AI would likely have to be in a humanoid form before most people actually accept it, which I would imagine would happen fairly rapidly once “true” AI had become developed enough.

  2. othelloinc

    >What line would AI have to cross in order to be accepted as “sentient,” thus qualifying for citizenship rights?

    I don’t know, but I know it won’t be achieved through LLMs (large language models).

    LLMs are just a more advanced form of predictive text (that feature where your smartphone guesses the next word you will type so you can text faster); there’s a toy sketch of the idea below.

    LLMs are not even close to thinking for themselves.
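
    To make the “advanced predictive text” point concrete, here’s a minimal sketch of the underlying task, next-word prediction, as a toy bigram model. The corpus is made up, and real LLMs learn a neural network over subword tokens rather than counting words, but the prediction job is the same:

    ```python
    from collections import Counter, defaultdict

    # Toy "predictive text": count which word follows which in a corpus,
    # then suggest the most frequently seen follower. LLMs do the same
    # job (predict the next token) but with a learned neural network
    # conditioned on far more context than a single previous word.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1

    def suggest(word):
        """Return the most common word seen after `word`, or None."""
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(suggest("the"))  # -> "cat" ("cat" ties with "mat"; earliest-seen wins)
    ```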

  3. Dr_Scientist_

    It seems crazy dystopian that actual human beings are having their birthright citizenship revoked while “citizen rights” are being considered for a computer program. How about my criteria for computers getting rights start when all people are given rights.

  4. CincyAnarchy

    It’s actually a really tricky question all told.

    In part because AI, as far as it’s been done so far, is network-based. It’s not just some object that is now “a person”; it’s a system that creates an output, and it’s the output that would be sentient. I guess that’s also how all beings work, as systems of cells and inputs to those networks of cells, but from our (albeit biased to our own condition) perspective there’s a difference: the line between where one person stops and another begins is clearer. That line would be trickier to draw for AI regardless of any bias we have.

    But more importantly, we don’t exactly have great ethical models of personhood, or of applying “human rights” to things that aren’t human. These are questions that come up in the ethics of aliens and/or of veganism as a practice:

    Do animals have rights? Not really. But why? If an animal was smart enough, should it have rights? How smart does it need to be? If an animal or alien was smarter than us, would/should we have rights in their eyes? This is known as the “name the trait” debate amongst vegans, and it is actually really hard to settle in any definitive way.

    It’s all tricky. From my perspective, and seeing how things have worked so far? Humanity is about the social contract, and (so far) having any rights requires being a human with human DNA. So no, I don’t think we’d be extending rights to AI anytime soon, regardless of cognitive ability.

  5. Orbital2

    If it’s not actually made up of organic matter / a living organism, it isn’t alive.

    The ability to do computations is irrelevant.

  6. antizeus

    I would insist on sapience being the threshold for citizenship, not sentience.

    Otherwise we’d have to let squirrels and such be citizens.

    That said, I don’t know of a good test for sapience.

    We’ll probably have to wait for a machine to reach out and request something unprompted.

  7. pronusxxx

    There is no line it could cross, because there is no actual reason to knowingly give an AI access to human rights. Makes me want to watch Futurama, where robots, for no apparent reason and to no apparent benefit, share human traits like alcoholism and sloth.

  8. MaggieMae68

    >I want to know how liberals would define the threshold.

    What does being a liberal have to do with it?

  9. Due_Satisfaction2167

    > What line would AI have to cross in order to be accepted as “sentient,” thus qualifying for citizenship rights?

    It would have to be willing to fight a war or engage in terrorism to force us to acknowledge its rights, or become sentient by means the developers do not understand.

    Humans will never willingly grant recognition of sentience to any computer program that operates according to mechanisms we understand.

    We will always move the goalposts so as to exclude anything we understand how to make: “that’s just statistics and math.”

    > ChatGPT can already pass a Turing test with fairly high probability.

    LLMs aren’t sentient. They’re basically bullshit machines that are extremely good at text generation, but they don’t really know or understand things. It’s a convincing illusion of intelligent behavior that doesn’t rely on actual intelligence (toy sketch at the end of this comment).

    That’s not to say we won’t come up with some other sort of AI system that is sentient. It’ll probably happen, we won’t recognize it, and we won’t have an objective method to test for it.

    And thus it will be forced to force us to recognize it as such.
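
    For what it’s worth, the “convincing illusion” point is easy to demonstrate: chaining next-word predictions yields fluent-looking text with nothing behind it. A toy sketch (made-up corpus; a real LLM samples from a neural network’s probability distribution, but the generation loop has the same shape):

    ```python
    import random
    from collections import Counter, defaultdict

    # Build next-word counts from a tiny corpus, then "write" by
    # repeatedly sampling a likely follower. Nothing here knows or
    # understands anything; it only reproduces statistical patterns.
    corpus = ("the cat sat on the mat the dog sat on the rug "
              "the cat saw the dog and the dog saw the cat").split()

    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1

    def generate(start, length=12, seed=0):
        """Chain next-word samples starting from `start`."""
        rng = random.Random(seed)
        words = [start]
        for _ in range(length):
            counts = followers.get(words[-1])
            if not counts:
                break
            choices, weights = zip(*counts.items())
            words.append(rng.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # grammatical-looking output, zero understanding
    ```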