AI / Machine Learning
July 13, 2022

AI Governance - What is the Best Recommended Framework?

AI has moved from research into real-life applications and continues to evolve rapidly. This makes it challenging to decide how its use should be framed moving forward.

AI's potential is enormous, but its use also carries risks for society. So, what are the conditions and rules around AI? How can AI be used, and how should it not be used? Who is to blame if something goes wrong?

The challenge here is to find a framework that establishes a balance between people, processes, and technology, and that lets engineers develop AI models in a context that protects society.

What do policymakers need to do to allow experts to innovate while also protecting citizens? Such a framework should define standards and principles on fairness and privacy, as well as how to address ethical challenges like bias in algorithms.

Once AI is part of our lives, regulation is needed, especially when AI is used to make decisions that affect us every day. This is why AI Governance is so important. Issues regarding data privacy and justice should be a continual discussion.

But what are the morals and ethics behind this technology?

It's clear that algorithmic bias is already present in machine learning algorithms. It has surfaced in many forms, related to race, gender, age, and more. AI is already helping us decide in key areas such as hiring, promotions, loan applications, and selection for medical treatment.

Bias present in our society can be reflected in the algorithms used to make these decisions. If we do not address the problem, we could exacerbate existing inequalities in our society.

Therefore, we need to determine how to act when AI shows bias. Many organizations are working on solving this problem before it is too late. But herein lie two main challenges: how do we determine whether an AI system is fair, and whether it is ethical?
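To make the fairness question more concrete, one simple starting point is to compare outcome rates across groups. The sketch below is illustrative only: the metric (demographic parity difference), the group labels, and the example data are assumptions, and a single number like this is a conversation starter rather than proof of fairness.

```python
import numpy as np

# A minimal sketch of one way to quantify fairness: the gap in positive-outcome
# rates between two groups (demographic parity difference). Group labels and
# example data are illustrative assumptions.
def demographic_parity_difference(predictions, groups, group_a="A", group_b="B"):
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    rate_a = predictions[groups == group_a].mean()
    rate_b = predictions[groups == group_b].mean()
    return abs(rate_a - rate_b)

# Hypothetical loan-approval predictions (1 = approved) for two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Approval-rate gap between groups: {demographic_parity_difference(preds, grps):.2f}")
```

A large gap does not automatically mean the model is unethical, but it is a signal worth investigating before the system is deployed.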


AI Governance is a work in progress

Several organizations are working on creating an AI Governance framework that provides good practices and principles for implementing AI systems.

Here are some examples:

  • Derechos Digitales is a Latin American non-profit organization whose objective is the development, defense, and promotion of human rights in the digital environment.

AI Governance best practices

So, what exactly are the recommendations from these organizations? And what do their frameworks look like?

Upon reflection, there isn't one clear framework to follow; rather, each organization provides its own considerations. Across the different sources, however, some common patterns emerge in the recommended best practices.


The recommendations where these organizations most clearly coincide are ensuring diversity and inclusiveness during development, transparency and explainability of machine learning models, and increasing public awareness of AI technologies.

Here is a summary of the most important suggestions: 

Diversity 

Improve diversity across the different roles involved in developing algorithms to protect against bias, as well as diversity in the data selected to train and test the model (a data-side sketch follows below).
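As a small illustration of the data side of this recommendation, the sketch below uses a stratified split so that each demographic group keeps roughly the same share in the training and test sets. The dataset, the "group" and "label" columns, and the split parameters are hypothetical, and this addresses data diversity only, not diversity in the team itself.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# A minimal sketch of keeping demographic representation consistent between
# training and test data. The file name and column names are hypothetical.
df = pd.read_csv("applicants.csv")

X = df.drop(columns=["label"])
y = df["label"]

# Stratifying on the demographic group keeps each group's share roughly the
# same in both splits, so no group is underrepresented in testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=df["group"], random_state=42
)

print(df["group"].value_counts(normalize=True))
print(X_test["group"].value_counts(normalize=True))
```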

Human involvement 

  • “Human in the loop” - this term refers to humans intervening to review and improve machine learning models (a minimal sketch follows this list).
  • Incorporating feedback in early phases - involve users early to better understand their needs and detect problems sooner.
  • Engage with a diverse set of users and use-case scenarios - test the ML model with a group of real people who represent society.
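Here is a minimal sketch of the "human in the loop" idea: predictions the model is unsure about are deferred to a human reviewer rather than acted on automatically. The model interface, the confidence threshold, and the review queue are assumptions for illustration.

```python
# A minimal human-in-the-loop sketch. The model object, its predict_proba
# interface, and the 0.8 threshold are illustrative assumptions.

def predict_with_review(model, sample, threshold=0.8):
    """Return the model's decision, or defer to a human when confidence is low."""
    probabilities = model.predict_proba([sample])[0]
    confidence = probabilities.max()
    if confidence < threshold:
        # Queue the case for manual review; the reviewer's decision can also
        # be fed back into the next training round.
        return {"decision": "needs_human_review", "confidence": float(confidence)}
    return {"decision": int(probabilities.argmax()), "confidence": float(confidence)}
```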

Analyzing data 

  • Carefully analyze the data across different categories.
  • Keep a lookout for outliers.
  • Check the source and veracity of the data.
  • Look at different metrics to see algorithm outcomes from different perspectives (a small per-category sketch follows this list).
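As a small illustration of these checks, the sketch below computes accuracy per category and flags outliers in a numeric column. The file name and the "group", "label", "prediction", and "income" columns are hypothetical.

```python
import pandas as pd

# A minimal sketch of per-category analysis on hypothetical prediction data.
df = pd.read_csv("predictions.csv")

# Accuracy broken down by group: one aggregate metric can hide large gaps
# between categories.
per_group_accuracy = (
    df.assign(correct=df["label"] == df["prediction"])
      .groupby("group")["correct"]
      .mean()
)
print(per_group_accuracy)

# Simple outlier check on a numeric column using the interquartile-range rule.
q1, q3 = df["income"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["income"] < q1 - 1.5 * iqr) | (df["income"] > q3 + 1.5 * iqr)]
print(f"{len(outliers)} potential outliers found in 'income'")
```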

Transparency 

  • Communicate any limitations associated with an algorithm.
  • Provide key explanations to users that are clear, specific, relatable, and actionable.
  • Apply traceability: document decisions made and data transformations, and keep an audit log showing the different processing steps (a minimal sketch follows this list).
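A minimal sketch of such an audit log is shown below, writing one JSON record per processing step. The step names, fields, values, and log format are illustrative assumptions rather than a standard schema.

```python
import json
import logging
from datetime import datetime, timezone

# A minimal audit-log sketch using a JSON-lines file as the sink.
logging.basicConfig(filename="audit_log.jsonl", level=logging.INFO, format="%(message)s")

def log_step(step, details):
    """Append one traceability record for a data transformation or decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "details": details,
    }
    logging.info(json.dumps(record))

# Illustrative entries documenting a data transformation and a model decision.
log_step("data_transformation", {"action": "dropped_rows_with_missing_income", "rows_removed": 42})
log_step("model_decision", {"model_version": "v1.3", "input_id": "case-001", "outcome": "approved"})
```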

Continuous monitoring

  • Monitor and control the outcomes of the model.
  • Retrain, if necessary, when problems are detected.
  • Periodically update data if and when needed.
  • Control continual learning, as models might behave in unpredictable ways (a minimal monitoring sketch follows this list).
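Below is a minimal sketch of outcome monitoring: recent predictions are compared with actual results, and a drop in accuracy beyond a threshold signals that retraining may be needed. The window size and the 5% threshold are illustrative assumptions.

```python
from collections import deque

# A minimal monitoring sketch, assuming labelled outcomes arrive over time.
class AccuracyMonitor:
    def __init__(self, baseline_accuracy, window_size=500, max_drop=0.05):
        self.baseline = baseline_accuracy
        self.max_drop = max_drop
        self.window = deque(maxlen=window_size)

    def record(self, prediction, actual):
        """Record one outcome; return True when retraining looks necessary."""
        self.window.append(prediction == actual)
        if len(self.window) < self.window.maxlen:
            return False  # not enough recent data yet
        recent_accuracy = sum(self.window) / len(self.window)
        return recent_accuracy < self.baseline - self.max_drop

# Usage: if monitor.record(pred, label) returns True, investigate the recent
# data for drift and retrain or roll back the model as needed.
monitor = AccuracyMonitor(baseline_accuracy=0.92)
```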

Seek government guidance

  • Governments and civil society can prioritize competing factors in hypothetical situations.
  • Develop national guidance for local authorities to lead algorithmic decision-making ethically and then monitor it. 

Final thoughts on AI Governance

As highlighted throughout, we need effective AI Governance to create a framework that ensures the correct use of AI and provides tools to understand what is happening when something goes wrong. When doing this, it's important to remain transparent and provide society and the users of machine learning models with information on the limits of AI.

At this time, some people are frightened of AI, and understandably so: they do not know what the technology is doing, and it is unclear what happens when the AI is wrong.

People are also apprehensive about what the future of AI might look like, especially with continued talk of it replacing the human aspect of certain jobs and industries.

It's clear that AI is here to stay, as companies are discovering the need to use it to keep up with the market. Therefore, we need to provide a safe, principle-based environment that supports users and society. We need to use AI as a tool, not as a replacement; the final decisions should still be made by humans.