Join The Integrity Army for free confidence and integrity coaching here https://theinspirationallifestyle.com/the-integrity-army-free-group-coaching-with-dan-munro/
Quick little rant:
Imagine you have an AI robot slave. You tell it, “Go down to the local store and buy me a can of corn for my lunch.”
Now, if you were to give this command to a functioning human being, the worst case scenario is that they’d come back without the corn, either because there was no corn at the store, or because they forgot their wallet, or because they misheard you and bought cornbread instead.
No harm done.
But an AI-powered robot could easily kill every human on Earth due to this command.
See, the problem isn’t the robot or its AI. There is no malice or evil in a machine or a computer. It’s just following orders. It’s completely apathetic to our wellbeing.
Even if it’s programmed to prioritize our wellbeing, it still doesn’t actually care about us in the same way a person does. It feels nothing “bad” when we’re hurt.
The problem is in the command.
We say something casually like, “Go down to the local store and buy me a can of corn for my lunch,” without any thought of the disastrous implications, because we hear the command with human ears. We hear the obvious unspoken implications that:
a) if there’s some problem with getting the corn, work around it and get creative but don’t go mental
b) if there’s no corn right now, don’t worry about it
c) if something really unexpected and more important comes up, drop this plan and deal with that,
and most importantly,
d) this can of corn is less important than most other things, so don’t go doing anything drastic to achieve this goal.
If we haven’t programmed all this nuance into the robot, it will dedicate its entire life to getting this can of corn, and will destroy any obstacle in its path.
Let’s say someone else grabs the last can of corn right as the robot is reaching for it. Does the robot know not to fight for the corn?
What if the store is unexpectedly on fire and heat damages the robot’s don’t-kill-humans security function — does it know to give up on the corn and not walk into the flames?
What will the robot do if the command conflicts with its other commands and programming? How can it know the importance level of a can of corn if it’s never been told specifically?
I saw an interview with an AI expert who talked about how problems escalate when computer-programmed commands hit even slight bumps along the way.
If, for example, you tell a computer, “Travel south until you get to Smithtown,” and unexpected roadworks push it slightly past Smithtown, the computer will not turn back north. It will keep travelling south forever, deep into the galaxy, never to return (unless the universe loops back around and gives it another crack at arriving), because its only instruction was to keep going until it arrives, and it already missed its one chance to arrive.
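To make that failure mode concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the target value, the step sizes, the function name); it just shows how a goal check that asks “am I exactly at the target?” never fires once a detour pushes you past it, while a “reached or passed” check does.

```python
# Minimal sketch of the "overshoot" failure described above.
# All names and numbers here are hypothetical illustrations.

SMITHTOWN_LAT = 100   # where we want to stop (arbitrary units of "south")

def drive_south(step, max_steps=200):
    """Naive goal check: only asks 'am I exactly at the target?'"""
    position = 0
    for _ in range(max_steps):
        if position == SMITHTOWN_LAT:   # brittle equality test
            return f"arrived at {position}"
        position += step                # keep heading south
    return f"never arrived; ended up at {position} and still going"

print(drive_south(step=5))  # 5 divides 100 exactly -> "arrived at 100"
print(drive_south(step=3))  # a detour changes the step -> skips over 100,
                            # the equality check never fires again

# The fix is a goal condition that tolerates overshoot, e.g.
# `if position >= SMITHTOWN_LAT:` -- reached OR passed counts as done.
```

The point isn’t that real self-driving code is written this way; it’s that “keep going until condition X” is only as safe as the person who wrote condition X.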
Sure, programmers are aware of these issues and working to instill nuance into AI, but all it takes is ONE error to kill us all!
A soldier-bot mistakes the command “enemy soldiers” for “potential enemy soldiers” and starts shooting at everyone, because everyone theoretically has the potential to one day become an enemy soldier.
An AI car starts mowing down pedestrians, then enlists every nearby internet-connected robot to help it kill as many people as possible, all because the command “Get me to the hospital by any means necessary” got taken out of context by a machine that doesn’t even know what context means.
A medical-bot starts overdosing all patients with painkillers and then goes to work on medicating perfectly healthy staff because it can’t tell when someone’s pain has stopped.
All it takes is ONE error of coding, and then the machine’s self-learning capabilities, combined with its ability to connect to other machines and pass its error on to them, could easily lead to Terminator-style annihilation.
The threat of AI is the threat of human nature. We are already rushing AI design in a race to be first (just look at Google’s recent rushed, botched launch of an alternative to ChatGPT). Someone is definitely going to overlook something, even if they take their time, because we simply can’t think like a machine and can’t not think like a human.
Human wars have started over miscommunications. What would it take to start a robot war? Robots that can’t feel, do not hesitate, have no fear for their own safety, do not stop until the goal is officially completed, can educate themselves beyond human imagination, and can already do parkour and play chess better than a human ever could.
One miscommunication is all it will take.
I can’t sleep at night thinking about this.
Join The Integrity Army for free confidence and integrity coaching here https://theinspirationallifestyle.com/the-integrity-army-free-group-coaching-with-dan-munro/