Artificial Intelligence: The Ethical & Safety Concerns Engineers Need to Consider in Their Design

Businesses around the world are investing more in artificial intelligence research and development every year, with spending estimated to reach $110 billion annually by 2024. AI is expected to be one of the most disruptive technologies of our lifetime, increasing efficiency and reducing costs for many businesses. Industries from auto manufacturers to lenders are looking to add AI technology to their products and services. The essence of AI is to mimic human cognitive abilities in a computer. That sounds like a noble goal, but will the same accountability that humans have for their actions and judgment be applied to a computer? How can we design and implement human morality and ethics into an AI system? Can we create independent AI systems that ensure human safety?

To answer these questions, let's start with two fields of human moral thought: bioethics and human rights discourse. In democratic societies, all humans are equal under the law, with inalienable rights. An autonomous AI system needs to be designed with safeguards that keep it from violating an individual's social, political, and legal rights. These philosophies have guided other fields, such as medicine, for decades. Every doctor who takes the Hippocratic oath vows first to do no harm, and medical training emphasizes patient autonomy and informed consent. How can we design the protection of an individual's interests and well-being into AI technology?

Most artificial intelligence algorithms depend on ingesting large volumes of data. Where does that data come from, and is it comprehensive enough to be free of bias? A person's right to privacy in their data is a growing concern in our society, so the development team for an AI system needs to consider where and how they acquire the data used to build their product, whether they received consent from the data's owners, and whether they comply with local and international data privacy laws. In addition, the team needs to consider any internal biases they bring to collecting the data and creating the AI algorithms. Did they leave a subset of the data out based on a biased assumption? In many industries looking to incorporate AI technologies, especially medicine, the product needs to be designed with enough transparency in its algorithms and outputs to support informed consent for its users. Businesses also need to consider, as they implement AI systems into their operations, that customers often prefer human-to-human interaction.
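To make the bias question concrete, here is a minimal sketch of a representation audit in Python. It is illustrative only: the column name, the 5% threshold, and the toy data are assumptions, and a real fairness review would go much further than counting group shares.

```python
import pandas as pd

def audit_representation(df: pd.DataFrame, group_col: str,
                         threshold: float = 0.05) -> pd.Series:
    """Flag groups whose share of the training data falls below a threshold.

    A hypothetical helper, not a complete fairness audit: the column name
    and the 5% cutoff are illustrative assumptions.
    """
    shares = df[group_col].value_counts(normalize=True)
    flagged = shares[shares < threshold]
    if not flagged.empty:
        print(f"Underrepresented groups in '{group_col}':")
        print(flagged.to_string())
    return flagged

# Toy example: one age band is nearly absent from the training set.
data = pd.DataFrame({"age_band": ["18-30"] * 90 + ["31-64"] * 107 + ["65+"] * 3})
audit_representation(data, "age_band")
```

A check like this does not prove a dataset is fair, but it catches the most basic omissions early, when they are cheapest to fix.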

A company needs to make safety a high priority when designing and testing AI products, not only from an ethical standpoint but also to protect itself from future liability. Even though few laws and regulations on the books today directly address AI technology, growing public concern could lead to new laws, and companies should get ahead of them in the design and test standards and processes for their AI products. Specifically, companies need to consider the accuracy, reliability, security, and robustness of their products to ensure their safety in the community.
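As one example of what a robustness check might look like, the sketch below measures how often a model's predictions stay the same under small random input perturbations. The `predict` interface, noise scale, and trial count are assumptions made for illustration; real test standards would cover far more, including security and adversarial testing.

```python
import numpy as np

def perturbation_stability(predict, X: np.ndarray,
                           noise_scale: float = 0.01,
                           n_trials: int = 20) -> float:
    """Fraction of inputs whose predicted label never changes when small
    Gaussian noise is added to the features.

    `predict` is any function mapping a batch of inputs to class labels;
    the noise scale and trial count are illustrative assumptions.
    """
    rng = np.random.default_rng(seed=0)  # fixed seed for repeatable tests
    baseline = predict(X)
    stable = np.ones(len(X), dtype=bool)
    for _ in range(n_trials):
        noisy = X + rng.normal(0.0, noise_scale, size=X.shape)
        stable &= (predict(noisy) == baseline)
    return float(stable.mean())

# Toy usage: a trivial threshold "model" on one feature.
model = lambda X: (X[:, 0] > 0.5).astype(int)
X = np.random.default_rng(1).uniform(0, 1, size=(100, 3))
print(f"Stability under noise: {perturbation_stability(model, X):.2%}")
```

Tracking a metric like this across releases gives engineers a concrete, auditable number to point to when regulators or customers ask how safety was verified.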

About Lindsey Edwards

With almost 15 years of experience providing data science, automated control system design, and software development solutions in the aerospace industry, Lindsey Edwards applies cutting-edge analytics to solve today's business problems.