Diversity must be at the heart of equitable AI development

As artificial intelligence has become part of everyday life, developers need to ensure that the data it learns from accurately reflects the real world.

People often think of AI as just code – cold, lifeless and objective. In many ways, however, AI is more like a child: it learns from the data it is exposed to and optimizes for the goals set by its developers, who in this analogy would be its “parents”.

Like a young child, AI doesn’t know the history or societal dynamics that have shaped the world as it is. And just as children sometimes make strange or inappropriate remarks without knowing better, AI naively learns patterns in the world without understanding the larger sociotechnical context underlying the data it learns from.

Unlike children, however, AI is increasingly being asked to make decisions in high-stakes settings, including identifying criminal suspects, informing loan decisions, and assisting with medical diagnoses. For AI “parents” who want to ensure that their “children” don’t learn to reflect societal biases and act in discriminatory ways, it’s important to consider diversity throughout the development process.

Diversity matters at multiple stages of the AI development lifecycle. First, diversity in training and evaluation data is essential for many tasks where the goal is to ensure that AI works well for people from different backgrounds. For example, the landmark paper “Gender Shades” [PDF] highlighted how facial processing technologies can have lower accuracy rates for Black women than for other groups. Gender Shades and subsequent research attributed these biases to a lack of sufficient diversity and representation in the datasets used to develop these technologies. This type of bias also occurs in humans: studies have shown that people have a harder time recognizing the faces of people of a different race, but psychological research has also shown that these biases are weaker in individuals who had more contact with people of other races while growing up.

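One concrete practice that follows from findings like “Gender Shades” is disaggregated evaluation – reporting a model’s accuracy separately for each demographic group rather than as a single aggregate number. The short Python sketch below illustrates the idea; the labels and group assignments are purely hypothetical, not drawn from any real benchmark or from the studies cited here.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute accuracy separately for each demographic group."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)  # count correct predictions
    return {g: correct[g] / total[g] for g in total}

# Hypothetical ground truth, predictions and group labels
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "A", "B", "B"]

print(accuracy_by_group(y_true, y_pred, groups))
# {'A': 1.0, 'B': 0.25}
```

Here the aggregate accuracy of 62.5 per cent looks respectable while concealing that the model is wrong three times out of four for group “B” – exactly the kind of disparity that per-group reporting is designed to surface.
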
It is not enough, however, simply to make dataset diversity a goal, and achieving it is not trivial in practice. As I explain in a forthcoming research paper, collecting sufficiently large and diverse datasets is very challenging, especially in sensitive contexts such as human-centric computer vision, where existing public datasets often suffer from diversity and privacy issues.

At Sony AI, we have several AI ethics initiatives focused specifically on ethical data collection. Our goal is to develop best practices and techniques that optimize fairness, privacy and transparency.

In addition to the diversity of the datasets from which AI learns and “grows”, it is also essential to consider the diversity of its developer “parents”. While AI products have an increasingly global reach, developers are overwhelmingly concentrated in just a few countries. This is an important issue to address, given that AI ethics and regulation depend on values and cultural contexts that differ between countries. A study of global moral preferences found, for example, that when faced with a “trolley problem” in which an autonomous vehicle must decide whether to swerve, killing pedestrians, or stay its course, killing its passengers, people around the world had very different moral preferences depending on the demographics of the hypothetical pedestrians and passengers. Just as parents imbue their children with their own moral preferences, we must consider how culture-specific goals and values might be reflected in the development of AI.

The level of diversity within the companies where AI is developed and deployed also matters. Company culture plays a key role in enabling or hindering diversity in AI teams. It’s common for employers to blame a lack of diversity on the pipeline – the notion that there aren’t enough women or underrepresented-minority students studying computer science or related fields. This focus on the pipeline ignores the high attrition rates of women and minorities in tech roles: a recent study examining why women and minorities leave AI teams found that attrition is driven primarily by toxic work environments, experiences of bias, and a lack of growth opportunities.

Having diverse AI “parents” is essential, because detecting potential problems in AI development requires an understanding of how the technology might interact with society in harmful ways. For example, the use of AI by law enforcement is extremely controversial in the United States due to the country’s history of biased policing. AI developers who study AI ethics in the US are familiar with the failure modes of past attempts to build AI for US law enforcement, but every country has its own societal inequities that AI can exacerbate. Addressing these harms requires greater awareness and understanding of the contexts in which AI is deployed.

While it’s not an easy challenge, it’s one that my colleagues and I at Sony are taking on. Over the years, Sony has consistently been recognized as one of the “World’s Most Ethical Companies” for its longstanding commitment to responsible business practices. Diversity is one of Sony’s core values, given the global and multicultural nature of the company and its businesses. Sony released its AI ethics guidelines [PDF] in 2018 and announced that it would review all of its AI products for ethical risks. Ethics by design is key to our approach, which includes AI ethics assessments at every stage of the AI lifecycle, from ideation and design to development and deployment.

As AI becomes more integrated into our daily lives, tech companies and others developing AI need to think about how they “parent” it: what representations of the world it learns from, and what values it comes to reflect. To build a fairer and more equitable future for AI development, diversity must be at the heart of AI solutions.

Alice Xiang is Global Head of AI Ethics at Sony Group Corporation and Senior Researcher at Sony AI.
