Kelvina Wairimu

The Ethics of AI - The Diversity Issue

Many industries around the world are already affected by Artificial Intelligence (AI) or have embraced the opportunities it affords, and the trend is global. With the rise of AI come many ethical issues and debates. We have seen, for example, how biased data fed into AI systems produces flawed predictions that are then deployed in the real world.




While we focus on developing and adopting the next new thing in AI, we also need to look at the consequences the technology may have once it is in use.


UNESCO just wrapped up the 40th session of its General Conference in Paris. One of the topics UNESCO took on at the conference was the ethics of AI, and I was invited to speak about the ethical issues of AI together with three other young minds. My main focus was the gender and diversity issue in AI and how it affects us now and may affect us in the future.


There are several ethical concerns when it comes to AI, but below I highlight the topic I discussed at the General Conference.


What is the gender and diversity issue of AI?

Currently, the makeup of the AI workforce is not reflective of the real world. In 2017, 71% of applicants for AI roles in the United States were men (Global AI Talent Report, 2019), and according to the World Economic Forum, only 22% of AI professionals globally were women, with no significant improvement since (Weforum, 2019).

The lack of diversity in AI creates a risk for AI systems to perpetuate existing forms of structural inequality.

Research & Development teams that lack diversity will not be sufficiently sensitive to the risks the technologies they develop pose to (other) socially vulnerable groups (Artificial Intelligence in Society, OECD, 2019). According to a research center at New York University, the lack of diversity within the AI field has already contributed to flawed systems that perpetuate gender and racial biases and historical power imbalances (The Guardian, 2019).


"Black box machine-learning models are already having a major impact on some people’s lives. A system called COMPAS, made by Northpointe, offers to predict defendants’ likelihood of reoffending, and is used by some judges to determine whether an inmate is granted parole. The workings of COMPAS are kept secret, but an investigation by ProPublica found evidence that the model may be biased against minorities" (www.technologyreview.com, 2017)


There are many more examples of AI replicating or perpetuating biases, including offensive classifications of people of color, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin tones. These flawed systems have been attributed to the lack of diversity among the people working in the industry and to the data the industry uses.


Another issue is that as more money and resources are invested in AI, the industry risks entrenching social divides and intensifying inequality between the genders and among other marginalized groups, leaving low- and middle-income countries more vulnerable to the negative social impacts of AI and less likely to benefit from the positive ones.


The lack of diversity concentrates an increasingly large amount of power and capital in the hands of a select subset of people.

How do we tackle the lack of diversity in AI?

If we increase the diversity of the workforce developing and working with AI systems, we also reduce the risk of those systems generating discriminatory and unfair outcomes.

Here are some methods we can deploy in order to deal with the lack of diversity in AI:


  • Understand the diversity issue in AI in depth, so that we can tackle it head-on.

  • Promote the involvement of all genders in STEM (Science, Technology, Engineering and Mathematics) subjects in order to create a larger talent pool to work with AI from an early stage (in the educational system).

  • Eradicate biases that hold women and minorities back from working in the field (from education to the workplace).

  • Hold companies accountable for their use of AI by ensuring that they have an incremental strategy for diversity and inclusion in their datasets, in hiring, and throughout their work culture, e.g. by employing frameworks to mitigate AI bias and by addressing the way AI systems are built.


Beyond increasing diversity and making AI an inclusive industry, we also need to look at how we can ensure that the data used in AI is free of biases and flaws. IBM has already created an AI fairness toolkit that individuals and companies can use to detect bias through fairness metrics and bias mitigators in machine learning models and datasets.
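
To give a sense of what such a toolkit does, here is a minimal sketch using IBM's open-source AI Fairness 360 library (aif360). The tiny hiring table, the column names and the group definitions are made-up illustrative assumptions, not data from any of the reports cited above; the sketch simply computes one fairness metric (statistical parity difference) on a dataset and applies one bias mitigator (reweighing) before a model would be trained.

# Minimal sketch with IBM's AI Fairness 360 (pip install aif360).
# The toy DataFrame below is illustrative only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the favourable outcome we want to check for bias.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.9, 0.3, 0.6, 0.5],
    "hired": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Fairness metric: difference in favourable-outcome rates between the groups
# (0 means parity, negative means the unprivileged group is disadvantaged).
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference:", metric.statistical_parity_difference())

# Bias mitigator: Reweighing assigns instance weights so that the
# reweighted data satisfies statistical parity before a model is trained.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)
metric_transf = BinaryLabelDatasetMetric(
    dataset_transf, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("After reweighing:", metric_transf.statistical_parity_difference())

A metric value of 0 means both groups receive the favourable outcome at the same rate; reweighing adjusts the instance weights so that a model trained on the reweighted data starts from a more balanced picture.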


Resources for detecting bias will help solve the bias issue, but they won't be enough to create a more diverse workforce in AI.

It is important to remember that as long as there is a lack of diversity in the AI industry, we have a lot of work to do. And even when diversity is increasing, there will still be work to do to ensure that diversity and inclusion remain a priority.


This is merely one of many ethical issues linked to AI. If you are interested in reading more about them, please take a look at the preliminary study by UNESCO's World Commission on the Ethics of Scientific Knowledge and Technology (COMEST): https://unesdoc.unesco.org/ark:/48223/pf0000367823.


