AI Ethics: Navigating the Landscape

Last year, some of the most influential tech gurus signed an open letter calling for a pause in AI development so that AI ethics could catch up. The letter was signed by some 20,000 people, including Elon Musk, Andrew Yang, and the philosopher Yuval Noah Harari.

The goal was to work together to develop and implement shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts to ensure the safety of AI technology. What has been done since 12th April 2023? Were nine months enough to achieve that enormous task? Seriously, what lies underneath all of this?

Screenshot of the open letter regarding AI ethics, signed by Elon Musk and others.

In this blog post, we dive into the origins of AI ethics, discuss whether AI can have ethics, and look at the main ethical concerns around AI today.

The origins of AI ethics

The letter asks for AI systems to be “more accurate, safer, interpretable, transparent, robust, aligned, trustworthy, and loyal”. It also calls for more governmental regulation and independent audits before AI systems are trained, points to the need for “tracking highly capable AI systems and large pools of computational capability”, and finally asks for more “robust public funding for technical AI safety research”.

All of this belongs to the domain of ethics, a philosophical discipline developed over the centuries. After WWII, during the Doctors’ Trial in Nuremberg, physicians were judged for experimenting on prisoners of war without their consent, and most of them were found guilty. Cases like this gave rise to “applied ethics”: codes or guidelines that help professionals behave according to what is expected of them.

If we want to specifically trace the origins of ethics in AI, we must talk about Isaac Asimov’s famous Three Laws of Robotics (Runaround, 1942):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

If you have read Asimov, you know how his stories revolve around the conflicts these seemingly simple laws can produce. A lot has changed since he wrote them, but the need to regulate machine behavior remains. So, what are the main concerns today? How can companies benefit from applying ethics to AI systems?

Does AI have ethics?

Whether you are building a better recommendation system or a chatbot for hiring new employees, your objective should be to provide the best service to your customers. But what does better mean? Should it just be faster, no matter what? Should it run autonomously and decide what is best for its users?

Of course, each AI domain needs its own specific ethics. The principles that guide the design of autonomous vehicles will not be the same as those we apply in cybersecurity. Each sector will focus on different aspects, but we can be sure that some of them will be common.

If we look at the history of ethics, there is one principle that has been followed from the very beginning: the no-harm principle. The Hippocratic oath has guided doctors for over two thousand years, and even with updates, not harming, directly or indirectly, remains a top priority.

This aspect is key. We should be careful while trying to do things better, otherwise we could end up harming others and ourselves. Imagine you have a chatbot spreading misinformation: it will not only hurt the people who receive that information but also the company’s reputation.
As has been said hundreds of times before, there is no morality in technology itself. We must adapt our discoveries to our social values.

What are the main ethical concerns today?

There are four major concerns when applying AI technology in business.

Bias in AI:

Biased data has a huge impact on AI training: a system will only be as good as the data it was trained on. Stereotyped data can flag certain groups as less trustworthy, making it harder for them to access personal credit, for example.

Face recognition systems, for example, are known to perform better when the user is white and male. This is a well-known problem that has been widely covered. In future posts, we will share some practices to help you avoid these situations.
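
As a very first step, you can compare your model’s performance across the demographic groups present in your data. Below is a minimal sketch of such a per-group audit; the arrays, labels, and group names are hypothetical placeholders, and a real fairness analysis would go much further.

    # Minimal sketch of a per-group performance audit (illustrative only).
    # y_true, y_pred, and group are hypothetical example arrays, not real data.
    import numpy as np

    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])                # ground-truth labels
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])                # model predictions
    group = np.array(["a", "a", "b", "b", "a", "b", "b", "a"])  # demographic group per sample

    for g in np.unique(group):
        mask = group == g
        accuracy = (y_true[mask] == y_pred[mask]).mean()
        print(f"group {g}: accuracy {accuracy:.2f} over {mask.sum()} samples")

If the accuracy gap between groups is large, that is a strong hint that the training data, and not just the model, needs attention.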

Transparency of AI Systems:

Many AI systems rely on Machine Learning algorithms that extract patterns from a given dataset during training. On top of that, these patterns can change with every update the system receives, so even if we find an explanation for a pattern, it can become outdated when new data arrives. This means that patterns are often hidden from the programmers, or in other words, not transparent. As mentioned above, bias in the data can therefore remain hidden from users while the system keeps reinforcing it.
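
One modest way to regain some transparency is to measure how much each input feature actually drives a trained model’s predictions. The sketch below uses scikit-learn’s permutation importance on a public toy dataset; the dataset and model are illustrative choices, and this only scratches the surface of model interpretability.

    # Minimal sketch: ranking input features by their influence on a trained model.
    # The dataset and classifier are illustrative; swap in your own model and data.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the test score drops.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")

Techniques like this do not explain everything, but they at least make visible which signals the model relies on, and they can be rerun after every update to spot shifting patterns.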

Privacy and surveillance:

This problem has been well known for the last decade. Ever since the term Big Data was everywhere, people started paying attention to the data they were giving away for free on the internet. I’ve seen close friends go from “I have nothing to hide” to “I only turn on my location when I’m using my GPS”. Technology has changed significantly in the last decade, but regulation has responded slowly. The same is going to happen with AI.

“Free” services like social networks are well known for exploiting the weaknesses of their users (procrastination, addiction, manipulation, etc.) to extract personal data and hold their attention. We all know that big companies exploit this situation, sometimes while hiding the fact. The occupation of “influencer” gets its name from exactly that.

AI increases both the possibilities for intelligent data collection and for data analysis. Technologies such as device fingerprinting and face recognition put personal privacy and freedom from surveillance at risk, making it far easier to identify and track individuals. These systems often reveal facts about us that we wish to keep private or are not even aware of, such as where we have been, by using GPS data without notice.

Manipulation of behavior:

As explained above, the problems of AI-driven surveillance go beyond the accumulation of data. The collected data is commonly used to retain users’ attention, which not only creates addiction to the service but also manipulates the decisions users make, online and offline.

Misinformation is one of the main concerns of online platform users. A well-known case is Cambridge Analytica, which used social media as a propaganda platform to influence the vote of thousands of Americans in 2016. Of course, a lot has changed since then, but so has the sophistication of these systems. Today we deal with deepfake photos, videos, and phone calls so realistic that we have to include fact-checking in our daily routine.

What are governments doing?

There are different ways to legislate a technology that is supposed to change everything. On one side we have the US vision, which puts almost no obstacles in the way of development. On the other hand, countries like France and Germany share the European vision of regulating AI more restrictively. Even the Vatican has its own manual.

A good reference is the AI Act, created by the EU, which defines guidelines depending on the risk a system poses. “Unacceptable risk” applications, like social scoring or cognitive behavioral manipulation of people, will be banned, while “limited risk” systems must comply with minimal transparency requirements and allow users to make informed decisions. You can read the whole document here.

In October 2023, during the G7 summit, Japan presented a different approach, named the Hiroshima AI Process. It sits somewhere between the other two, but it is specific enough to address problems like copyright or the disclosure of personal data. Some of the defined rules include:

  • Before and during deployment: identify, evaluate, and mitigate risks.
  • Mitigate vulnerabilities and patterns of misuse.
  • Transparency about limitations and/or inappropriate use.
  • Share information responsibly with other organizations.
  • Implement measures to protect personal data and intellectual property.

Conclusions

Although AI is a relatively new tool, AI ethics can largely be seen as an extension of existing data management ethics and legislation. As long as we do not consider AI systems to be moral agents, and we still do not, the way our applications treat data should follow the usage rules and regulations already in place for GDPR data management.

Questions like who is collecting the data, why it is being collected, how long it will be stored, and with whom it will be shared must be answered before using it. Let’s say you are using a chatbot that talks with your customers. If you want to store the conversations your users have with the chatbot, you should know how to handle that data: perhaps treat it as sensitive content, inform the users, and even rethink why your company needs that data in the first place.
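
To make this concrete, here is a minimal sketch of what privacy-aware storage of a chatbot conversation could look like: obvious identifiers are masked and every record carries an explicit expiry date. The redaction patterns, field names, and 30-day retention period are assumptions for illustration, not legal advice or a guarantee of GDPR compliance.

    # Minimal sketch of privacy-aware storage of chatbot messages (illustrative only).
    # Redaction patterns, field names, and retention period are assumptions.
    import re
    from datetime import datetime, timedelta, timezone

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\+?\d[\d\s\-]{7,}\d")
    RETENTION_DAYS = 30  # hypothetical retention period; align with your own policy

    def redact(text: str) -> str:
        """Mask obvious personal identifiers before the message is persisted."""
        text = EMAIL.sub("[email]", text)
        return PHONE.sub("[phone]", text)

    def prepare_record(user_id: str, message: str) -> dict:
        """Build a storable record with redacted content and an explicit expiry date."""
        now = datetime.now(timezone.utc)
        return {
            "user": user_id,  # pseudonymous ID, not a name or email address
            "message": redact(message),
            "stored_at": now.isoformat(),
            "delete_after": (now + timedelta(days=RETENTION_DAYS)).isoformat(),
        }

    print(prepare_record("u-123", "Mail me at jane@example.com or call +31 6 1234 5678"))

Even a small step like this forces you to answer the questions above: what is stored, in what form, and for how long.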

Trying to set up general rules or guidance is a complicated task, and as we saw with earlier technologies, general legal restrictions arrive later than expected and are sometimes already outdated by the time they are published.

In the same way that we do not rely solely on government legislation to make our workplaces safer or better for the environment, we should not wait for legislation to create better AI systems. Assume that you will be responsible for your AI system: it is your responsibility to double-check whether your data is biased, to collect only the relevant information, and, most importantly, not to harm your users.
If we as humans do not implement ethics within our AI designs, they will certainly not do it on their own; AI does not know how unless we teach it.

Want to know more about what we do?

We are happy to think it through with you. Send us a message.