
AI ethics must be a continuous practice, says a director at the Oxford Internet Institute

Written by Hamnah Khalid · 1 min read

Over the past two decades or so, we have seen a dramatic shift of emphasis in technology: everybody wants AI. People want everything from their personal computers to their washing machines to be artificially intelligent. The simple idea that a machine can learn from what it has been told and then make better decisions and judgements fascinates humanity, not to mention the seemingly bottomless list of applications for this kind of tech.

But, with great power, unfortunately, comes great responsibility. These AI-powered bots, on the one hand, possess the ability to make our lives easier and more efficient. On the other hand, however, they pose many real and serious safety and security risks.

Mariarosaria Taddeo, the Deputy Director of the Oxford Internet Institute’s Digital Ethics Lab, suggests following various ethical principles to avoid these risks altogether.

In an interview with thenextweb, Taddeo described these technologies as “transformative.” They are reshaping our societies and the reality in which we live, so we need to make sure that this transformation leads to the societies we want: a post-AI society that is democratic, open, and diverse. To achieve these ends, it is essential that ethical considerations lead us down the right route. We cannot leave it too late.

Taddeo, together with her team at the Digital Ethics Lab, helps come up with guidelines on how to ethically develop, implement, and deploy technologies like AI. She believes that these guidelines and rules can help steer these advancements in the right direction.

“When we think of digital technologies, we cannot disregard their social impact with respect to the ethical values and principles that underpin our societies. If there is friction between these values and principles and technological innovation, the latter will not be adopted, and this friction is also likely to lead to strict policies and regulation.

In turn, this can hinder innovation. Ethics, when embraced at the beginning of any design process, can help us avoid this path, limit risks, and make sure that we foster the ‘right’ innovation.”

Sounds simple enough, doesn’t it? There is a set of ethical rules that you must follow with the implementation of each technology.

The process gets trickier, and harder to judge, whenever a technology changes or takes another step in its evolution. Given the speed at which technological advancements are taking place in this modern era, that happens a lot more often than you might think.

This is where continuous reassessment of these rules and guidelines becomes necessary.

These changes, Taddeo says, “may pose new ethical risks or new ethical questions that need to be addressed.”

A “trust and forget” approach to AI is a dangerous one. The use of AI should be coupled with forms of monitoring and control of the system. More than striving for trustworthy AI, we should develop processes that make AI reliable.

The Organisation for Economic Co-operation and Development’s Principles on Artificial Intelligence, Taddeo believes, address this need for continuous monitoring very well: they call for an AI system to behave as intended not only when it is first deployed but throughout its entire lifecycle.

“Ethics — especially digital ethics — should be seen more as a practice than as a thing,” Taddeo believes, when it comes to artificially intelligent machines.

What do you think?