Ever since Isaac Asimov introduced the Three Laws of Robotics in his 1950 anthology “I, Robot,” society’s preconceived notions about the role of robots and their ability to protect and aid humanity have permeated the national dialogue via popular culture and the scientific community. As the discussion of robotic systems becomes less theoretical and more practical, the decisions being made today about the role of AI, UAVs, and UGVs in our everyday lives carry very real implications.
Robotic systems are well on their way to fulfilling their promise of elevating security, healthcare, safety, and industry while minimizing financial risk and protecting human capital, but the rapid progress paving the way for this future also presents challenges. Ethical dilemmas surrounding employment, autonomy, and conflict scenarios raise legitimate questions for our society.
As robotic systems become more sophisticated, society must contend with the challenges facing the relationship between these systems and human capital. A study from the McKinsey Global Institute estimated that up to 800 million jobs could be lost or displaced by automation technologies by 2030, with factory and food workers among the hardest hit.
Does this spell doom for laborers? Not necessarily. Is it something we should talk about? Absolutely. The figure paints a dire picture of the future, but it does not align with historical precedent: past waves of automation have repeatedly displaced some kinds of work while creating new categories of it.
Instead of buying into this dystopian Age of Ultron where only the sassy and genocidal robots survive, what would it look like to foster the kind of professional development that leads people into higher-level positions? There are jobs that require physical, laborious, and monotonous effort, and there are those that require strategy, creativity, and innovation. One of those types sustains societies; the other advances them.
Ethics and UGVs, Pt. 1
Enhancing workflows, mitigating dangerous terrain, synthesizing complex data: robotic systems have the potential to augment industry practices and production, but these benefits are predicated on the human mind behind the controls. Introducing the element of autonomy reframes the dialogue surrounding the role and complexities of robotic systems.
Consider the following: the self-driving car industry is projected to generate an estimated $23.5 billion by 2021. Volkswagen CEO Herbert Diess has said, “the cars of the future must above all be driverless, electric, and safe.” Elon Musk has boasted that by 2020, drivers will no longer have to pay attention to the road, thanks to the sophistication of Tesla’s autonomous systems. Bold, visionary prediction, or wishful thinking?
Even modest interpretations of these figures suggest that interest in autonomous vehicles is serious. Also serious are the ethical dilemmas that arise when choices must be made without a human behind the controls. What happens when unmanned vehicles must test Asimov’s laws and choose whom to protect in an unavoidable collision?
Ethics and UGVs, Pt. 2
The problems facing robotic systems demand a two-part answer: who constructs the ethical framework within which robots operate? And how will society respond to that framework?
Problem-solving in a new frontier requires the expertise and industry experience of several disciplines. Tomahawk Robotics puts this principle into practice by taking bold steps toward integrating unmanned systems into modern workflows. As for the second question, more than twenty states have passed laws concerning self-driving cars, bringing state governments into the dialogue. If anything sounds less compelling than “interdisciplinary,” it’s “state government,” but this is what progress looks like. And as legislators address ethics and robotics, these frameworks will become the norm in everyday life.