Artificial intelligence (AI) has been the subject of controversy for decades. Some have questioned its purpose, while sci-fi movies have portrayed another fear: that AI will take over the world. Even some leaders in the tech industry are beginning to express concern about the power of AI. Although AI has significantly advanced digitization and improved global markets and various industries, many believe that the technology is too complex and sophisticated, and that its progression should be checked by regulation.

Increasing public concern around AI technology and growing calls for AI regulation from organizations and industry leaders led to discussions on the ethics of AI and the need for regulation at the 40th International Conference of Data Protection and Privacy Commissioners in Brussels last month. On October 23, 2018, these discussions culminated in two important documents: the Universal Guidelines for Artificial Intelligence (UGAI) and the Declaration on Ethics and Data Protection in Artificial Intelligence (“the Declaration”), the latter set forth by the French and Italian data protection authorities (respectively, the CNIL and the Garante). Both lay out principles and guidelines for data protection as AI continues to develop. This post looks at the 12 guidelines established in the UGAI and their potential impact on businesses that develop or use AI technology.


The Purpose of the UGAI

AI uses complex algorithms to complement and augment human capabilities. We use AI every day, in tools like: 

  • Health applications that can assist patients with taking their medication and help doctors read X-rays
  • Tools that analyze huge amounts of data and identify fraudulent activities for financial institutions
  • Virtual assistants (e.g., Siri for Apple users) that answer questions, make recommendations, and automate routine tasks

The reality is that AI can be found everywhere and is used to automate tasks more quickly and efficiently. The growth and popularity of AI has led some to worry that AI will take over the world, but it is important to understand that current AI technology is still relatively limited. Most current AI technology is programmed to perform one or a few clearly defined tasks at a time, often with the oversight of a human being.

The main concern of regulators is not that AI will rule the world, but that the companies developing such technology are not held accountable. In an effort to address these concerns, the UGAI establishes two key objectives:

  1. To ensure that the further development of AI remains consistent with its original purpose and continues to respect fundamental human rights.
  2. To consider the impact that further AI development may have on individuals and society.

In other words, the UGAI was developed to inform the public and improve the design and use of AI while maximizing its benefits and mitigating risk to further protect individuals and their rights. 


The 12 Guidelines of the UGAI

  1. Right to Transparency. All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, logic, and techniques that produced the outcome.
  2. Right to Human Determination. All individuals have the right to a final determination made by a person.
  3. Identification Obligation. The institution responsible for an AI system must be made known to the public.
  4. Fairness Obligation. Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions.
  5. Assessment and Accountability Obligation. An AI system should be deployed only after an adequate evaluation of its purpose and objectives, its benefits, as well as its risks. Institutions must be responsible for decisions made by an AI system.
  6. Accuracy, Reliability, and Validity Obligations. Institutions must ensure the accuracy, reliability, and validity of decisions.
  7. Data Quality Obligation. Institutions must establish data provenance and assure quality and relevance for the data input into algorithms.
  8. Public Safety Obligation. Institutions must assess the public safety risks that arise from the deployment of AI systems that direct or control physical devices and implement safety controls.
  9. Cybersecurity Obligation. Institutions must secure AI systems against cybersecurity threats.
  10. Prohibition on Secret Profiling. No institution shall establish or maintain a secret profiling system.
  11. Prohibition on Unitary Scoring. No national government shall establish or maintain a general-purpose score on its citizens or residents.
  12. Termination Obligation. An institution that has established an AI system has an affirmative obligation to terminate the system if human control of the system is no longer possible.


What is an Organization’s Responsibility?

The UGAI applies specifically to the institutions that fund, develop, and deploy AI technology. It is not intended to chill innovation but rather to provide a tool to promote and enhance collaboration among these organizations for the further development of best practices concerning AI systems. The UGAI provides industry leaders with a foundation upon which further parameters can be built to address the concerns surrounding AI.

Organizations that develop AI should not only heed the UGAI but also focus on improving awareness around AI's purpose and benefits. One resource for raising awareness is the Partnership on Artificial Intelligence to Benefit People and Society, established in 2016 by a coalition of tech giants. This coalition works to promote the fair and ethical development of AI technologies through the formulation of best practices, the advancement of the public’s understanding of AI, and “an open platform for discussion and engagement about AI and its influences on people and society.”

Organizations developing and/or using AI technology could also establish an ethics committee to manage a governance process that promotes transparency around the development, implementation, and use of this technology.


Regulating Artificial Intelligence

The UGAI is a good first step in regulating a technology like AI. To some, it may seem too broad, but that breadth is by design: it allows the guidelines to address the wide variety of organizations developing and using AI technology.

Currently, there are both challenges and opportunities concerning AI, as regulations struggle to keep up with the fast pace of technology. Because of the many differing views on AI regulation, it is important that regulators and industry leaders work together to find a balanced approach in building a foundation of transparency and clarity for the betterment of humanity.
