My question is, can an AI bot lie in a chat session?
Last week I was in a support chat system and the answers it kept giving me I finally said you must be a bot give me a real person. Of course then my connection crapped out. So if I say, are you a bot, can it lie?
AI lies all the time; they call it hallucinating. If it doesn't have a real answer, it will make something up based on the word probabilities of its training set. If that's not lying, I don't know what is. I have challenged ChatGPT to give me references for information it provided, and every reference was made up; not one was true or real. If the bot is an offshoot of ChatGPT, then yes, it may lie.
Can they? They do already. There are numerous accounts of AI chatbots and LLMs inventing facts and events.
Though it might be worth making a distinction between what's referred to as "hallucinating" (akin to misinformation: factually incorrect but unintentional) and intentional lying, where deception is used to achieve some goal.
One thing we should fear is the use of AI to, for example, create doubt/chaos in public attitudes toward government by intentionally using deceptive or incorrect information to achieve an effect. Or, from the other side, governments using AI to suppress political dissent.
There was an interesting article about one method used by Chinese censors. Instead of confronting and countering a social media post the government considers undesirable, they just reply with a redirection, often friendly in tone. It's sort of the weaponization of the windywave tactic. For example, if someone is trying to get people to protest strict public health measures, they'll just post a link to this amazing new mask design, etc. AI would be great at windywaving…
As others have said, obviously the 3 Laws are fiction. AI can do pretty much what you program it to do. That said, "lying" implies consciousness and intent to make an untruthful statement. In that regard, I don't think bots or AI can currently lie, although they can definitely say things that aren't true.
I think I figured out a way to tell whether it's a real person or a bot. I can use words with intentional misspellings and unnecessary spaces, which a person would still be able to rec 0g nise but a bot wouldn't be able to figure out.