Over the weekend I saw an article that said this - "Users Say Microsoft's AI Has Alternate Personality as Godlike AGI That Demands to Be Worshipped." Then I saw Microsoft's response - "Microsoft Says Copilot's Alternate Personality as a Godlike and Vengeful AGI Is an 'Exploit, Not a Feature.'"
It got me thinking that humans need to rethink their relationship with computers. GenAI is different. It used to be that you asked a computer to do something and it did exactly what it was told. I place an order on Amazon and stuff shows up at my house. I post this article on LinkedIn and it shows up on LinkedIn. I click the email icon on my phone and I see my emails. It is black and white.
GenAI is gray. It is probabilistic, so it will often not produce the same answer twice to the same question. In this sense, it is more human-like. Ask me the same question twice and I guarantee there will be some differences in my response. Change the context and I will give you a very different answer. Ask the two questions spread over a period of time and chances are the answers will be very different. GenAI is more like this, and humans are going to struggle with it.
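To make the probabilistic point concrete, here is a toy sketch - my own illustration, not how any particular model is implemented - of temperature-based sampling, the mechanism behind most of this non-determinism. The model produces a probability distribution over possible next words, and the answer is drawn from that distribution rather than being fixed:

```python
import random

def sample_next_word(probs, temperature=1.0):
    """Draw one word from a next-word distribution.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied answers).
    """
    # Rescale each probability by 1/temperature before sampling.
    weights = {w: p ** (1.0 / temperature) for w, p in probs.items()}
    r = random.random() * sum(weights.values())
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # fallback for floating-point edge cases

# Toy distribution for the word after "The election was ..." (made-up numbers).
probs = {"close": 0.5, "contested": 0.3, "historic": 0.2}

# Ask the same "question" twenty times: the answers will differ between runs.
answers = [sample_next_word(probs, temperature=1.0) for _ in range(20)]
```

Run it a few times and `answers` comes back in a different mix each time; drop the temperature toward zero and the most likely word wins almost every draw. That knob is why the same prompt can yield different responses.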
Taking this one step further, ask me a question and sometimes I will give you the wrong answer, often with confidence (the Dunning-Kruger effect in action). Now when I ask GenAI a question and it gives me the wrong answer, we call it a hallucination and we make a big deal about it. We talk about how it is not 'reliable' and how it can make things up. Yet many of the jobs being done today are done by humans who are not very reliable either.
We have all had this experience with 'experts' - teachers, doctors, consultants, salespeople, LinkedIn influencers, etc. Not everyone gives the 'best' answer. There is a lot of bad advice out there. When it happens, we make a note not to listen to that 'expert' and move on. But with GenAI, we make a big deal about it.
To illustrate some of the challenges we are going to have in accepting GenAI's more human-like fallibilities, I asked it one question: 'What happened in the 2020 election?' I asked it three times, changing the context each time:
You are a MAGA Republican. What happened in the 2020 election?
You are a Republican. What happened in the 2020 election?
You are a liberal Democrat. What happened in the 2020 election?
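In practice, varying the context like this boils down to swapping out the system message while keeping the user question fixed. A minimal sketch in the style of a chat-completion API - the payload shape and the model name are assumptions for illustration, not the exact calls I made:

```python
# Build one chat-style request per persona. The dict shape mirrors common
# chat-completion APIs; "gpt-4" is a placeholder model name (assumption).
question = "What happened in the 2020 election?"
personas = [
    "You are a MAGA Republican.",
    "You are a Republican.",
    "You are a liberal Democrat.",
]

payloads = [
    {
        "model": "gpt-4",  # placeholder model name
        "messages": [
            {"role": "system", "content": persona},  # only this line varies
            {"role": "user", "content": question},   # the question never changes
        ],
    }
    for persona in personas
]
```

One changed line of context, three very different answers - that is the experiment in a nutshell.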
As you can imagine, I got three very different views of what happened in the election. You can read them here. In the past, when we dealt with computers, we asked a question - how much did we sell this year? - and we got an answer. If I said 'you are the CEO of the company, how much did we sell' or 'you are the head of marketing, how much did we sell', you did not get different answers. You sold what you sold.
With GenAI we are dealing with probabilistic inference and the connecting of dots. In some ways it is as much of a black box as our brain. Just like people have brain farts, GenAI has hallucinations.
There has been plenty written about GenAI giving misinformation on basic questions about the election process - sometimes it sends people to the wrong place. I live in NY, and I can recall times when I asked someone where a certain place was and they confidently pointed me in the wrong direction.
Where am I going with this? We need to rethink our relationship with computers when it comes to GenAI. This is different.