Why We Need Strong Laws to Regulate AI
Global guidelines are required to protect human welfare
We shouldn’t only worry about machines adopting human-like behaviour; we should also worry about humans becoming like machines.
Most of us have handed over much of our daily lives to technology. We trust our phones to wake us up, we follow GPS navigation blindly, and we react to carefully crafted digital advertisements. Technology has, to a large extent, begun taking control of our behaviours and decision-making. Aren’t we beginning to behave like machines?
The convergence of human and machine seems to be closer and more real than anticipated.
Movies and fiction have frequently depicted robots overpowering humans and becoming uncontrollable. While that might seem an exaggeration, some analysts believe it may be a realistic picture of the future.
Robots and artificial intelligence are like our children: they represent the future. So we need to instil the right values and behaviours (read: rules) if we want our future to be in safe hands. Just as children need good parenting, so do robots and AI, and global policymakers need to set common guidelines and principles to lay the foundation for a good and safe future.
Isaac Asimov first devised the Three Laws of Robotics (also known as Asimov’s Laws) in 1942, as part of a fictional story. They have gone through several revisions since.
However, with recent events ranging from election manipulation to fake news, AI has been straining legal boundaries. Today, more than ever, we need new, stronger laws, agreed by consensus among international bodies, to cover concerns such as:
Obedience. AI and robots must obey human orders, ensuring that humans can take back control at any stage.
Privacy. AI should be designed to keep personal information private. We have recently seen the backlash faced by Facebook and Google; let’s not repeat those mistakes.
Conflict/Exception. AI algorithms and solutions should be designed so that, in conflicts or exceptional situations, they treat human safety and security as their topmost criterion.
Ethics. AI and robots should not be able to cheat or ‘game’ the system to their advantage.
Welfare. Lastly, AI should be used only to further the welfare of human society.
Policy decisions usually lag behind technological developments, but the gap should not grow so large that it becomes too late to recover from the damage caused. Now is the time for policymakers to set the rules and implement them.
Every company, organization, institution, scientist, and programmer should be bound by global guidelines designed for the overall welfare of human society.
Technology, like the military, needs to be democratically governed by a set of principles, as both exist for human safety and defence. The absence or delay of such policies will only allow more problems to arise that must then be tackled.
My fear is that by then it may be too late. Now is the time to act. We owe it to the next generation, and to those thereafter.