It’s hard to find a discussion about AI safety that doesn’t focus on control. The logic goes: if we’re not controlling it, something bad will happen.
This sounds to me like actual, real-life madness. Do we honestly think that “laws”, “control structures”, or human goals will matter to a super-intelligent machine? You may as well tell me that ants run the world.
We need to look more closely at nature. Our idea that the world is a hostile, dog-eat-dog sort of place isn’t as old or as well-founded as we think. Nor is our control fetish.
There might be solutions for us in the way that complex natural systems stay stable.
Compassionate Machines
A machine that obeys whatever constraints it can’t avoid and minimizes some loss function is pretty effective at getting things done. There are people like that, in fact: psychopaths.
Call me old-fashioned, but I prefer my psychopaths to be human, and less than massively parallel.
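To make that concrete, here is a minimal sketch of such an agent: nothing but constrained loss minimization. The loss function and the constraint are hypothetical placeholders, and scipy is just one convenient way to write it down.

```python
# A minimal sketch of the agent described above: it minimizes a loss
# function subject only to the constraints it cannot avoid. The loss
# and the constraint here are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

def loss(x):
    # The agent "wants" x to reach (3, 3); nothing else matters to it.
    return np.sum((x - 3.0) ** 2)

# The one thing it cannot avoid: an inequality constraint x0 + x1 <= 4.
unavoidable = {"type": "ineq", "fun": lambda x: 4.0 - np.sum(x)}

result = minimize(loss, x0=np.zeros(2), constraints=[unavoidable])
print(result.x)  # ~[2. 2.] -- pressed flat against the constraint boundary
```

Notice that the solution sits exactly on the constraint boundary: nothing outside the loss and the constraint enters the picture at all, which is precisely the worry.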
In many situations, healthy emotions aid decision-making because they give fast, accurate information. They’re like a form of perception: a way of overriding logic when logic can’t see the wood for the trees.