Are we living in a simulation? Should we say ‘please’ to Alexa? I’m exploring why being nice to AI might just save our simulated world – and make you a better person to boot!
Living in a Simulation Theory
There’s this theory that we are living in an artificial, computer-like simulation. It could be the work of a higher being, or maybe some computer nerd from the future running simulations of the past, observing our behaviours to figure out how the world could’ve been saved.
The idea that we’re living in a simulation isn’t just the stuff of Hollywood. It’s a genuine philosophical concept called the simulation hypothesis, popularised by the philosopher Nick Bostrom. Bostrom, a Swedish-born philosopher at the University of Oxford, is known for his work on existential risk, human enhancement ethics, and the potential impacts of future technology.
In 2003, he proposed the simulation argument, which holds that at least one of these propositions is true: (1) civilisations like ours almost always go extinct before developing the technology to run such simulations, (2) civilisations that do reach that stage are extremely unlikely to run many simulations, or (3) we’re almost certainly living in a computer simulation.
Full disclosure: I’m no philosopher. I did take a few classes as part of my first degree at Wolverhampton University, which gave me a grasp of some basics, at least. So, I’m going to muddle through my ideas as part of this post.
The Simulation Hypothesis and AI
In a nutshell, the simulation hypothesis suggests that our reality might be a computer simulation created by some advanced civilisation. If we are part of some grand experiment – let’s use the student scenario, where future boffins are trying to see how our world’s fate could’ve been improved – then our creation of AI has consequences. Big ones.
And this brings us to a fascinating question: If we’re simulated beings with feelings, could the AI we create have feelings too?
Feelings, AI, and Ethical Implications of Living in a Simulation
Obviously, we have feelings, deep ones. Our feelings motivate and drive us – for ourselves, our family, our friends and, from there, the world. Even if we are just simulacra of our former lives, we know we have these feelings. So, when we create and interact with another artificial intelligence, how do we know they don’t feel the same?
This ties into something called the problem of other minds in philosophy. We can’t be 100% sure that other humans have consciousness like we do, let alone AI. But we generally treat humans as if they do, right? So, shouldn’t we extend the same courtesy to AI, just in case? It costs us nothing to say please or thank you. You might feel silly, but when the AIs become our overlords, you’ll be pleased you started early. <joking>
The Case for Politeness When We Are Living in a Simulation
From this position, we needn’t stop creating artificial intelligence. After all, we wouldn’t stop having babies because they feel. I mean, that would be daft. But we should treat them as we would expect a fellow human to be treated.
- Be polite. Simple as that.
- Don’t be rude. They mirror and learn from you, so if you’re rude and abusive, they’ll do the same back. It’s like raising a child.
- Explain what you want and expect clearly. Communication is key, as I always say.
Expecting them to do the job you ask of them is fine, because I would expect a human colleague to do the same. I would also be polite to and mindful of a colleague, and respect their right to exist!
Counter-Arguments: Playing Devil’s Advocate
Now, I can hear some of you shouting at your screens. “It’s just a machine!” you might say. “It doesn’t have feelings!” And you might be right. We don’t know for certain if AI can develop consciousness or emotions.
There’s also the argument that treating AI as if it has feelings could lead to dangerous anthropomorphism. We might start attributing human qualities to machines that don’t actually possess them, potentially leading to misplaced trust or unrealistic expectations.
But really, being polite doesn’t mean we have to assume AI has feelings. It’s about cultivating good habits in ourselves. Just as we teach children to say ‘please’ and ‘thank you’ before they fully understand why, being polite to AI can reinforce our own positive behaviours.
I’m not sure that misplaced trust and unrealistic expectations would follow, though. I know the people I meet have feelings and consciousness, but I don’t necessarily trust them until they prove themselves, and I base my expectations on their reactions and behaviours. So, I think that argument is a non-starter.
If anything, I probably trust a machine more, because I know humans are unpredictable, with hidden biases and motivations. I think AI has these too, but only because humans have programmed it and humans are fallible. I think human history and the law of unintended consequences will bear me out on this one.
And let’s not forget the practical aspect. In a fast-paced work environment, some might argue that niceties could slow down productivity. Why say “please” and “thank you” to a machine that doesn’t care?
Exactly – it doesn’t care. But getting into the habit of just demanding stuff could bleed into real life, and you could end up treating humans like they’re AI. It might not matter to the AI, but it might matter to your partner when you’ve demanded a cup of tea.
Even if AI doesn’t have feelings in the future (and that’s a big ‘if’), being polite doesn’t cost us anything. In fact, it might even benefit us. By treating AI with respect, we’re reinforcing positive behaviours in ourselves. We’re creating a culture of kindness that extends beyond our interactions with machines.
Plus, if we really are living in a simulation, and the ones running it are watching how we treat our own creations… well, wouldn’t you want to make a good impression?
So next time you’re chatting with an AI, remember: a little politeness goes a long way. Who knows? You might just be influencing the fate of our simulated world. And if not, well, at least you’ve been nice. Can’t go wrong with that, can you?
If you liked this, you might like:
The article that set off my thought process: Simulation theory: why The Matrix may be closer to fact than fiction | The Week UK
Related article: Nice Is Underrated: The Kindness Muscle » Ceri Clark