The Turing Point: Artificial Intelligence At The Nexus Of Morality, Bias, & Societal Change

Kinetica

Artificial intelligence is at a turning point. We must weigh the implications of AI now, or risk creating a world where AI makes choices that affect all of us before we have agreed upon the terms of engagement.

There are three key areas of concern: morality, bias, and societal impact. AI as a technology has begun to encroach on each of these domains.

The self-driving car is perhaps the most relatable example of moral concern. For starters, if a self-driving car finds itself in a situation where an accident is unavoidable, does it prioritize the safety of the person in the car, or does it protect the greatest number of people, swerving away from a crowd of pedestrians but putting the passenger in grave danger? This is as much a philosophical question as a practical one, and reasonable people may disagree about the correct response.

But someone has to program the model that makes that call. Does that person share our moral viewpoint? Have they considered the moral question at all? Do we want that call to rest with a human engineer, or with the AI that makes our self-driving cars possible? Whose job is it? Will the programmed moral compass of that vehicle be a black box, or an open book we can review before deciding whether to take a ride? And if we find that self-driving cars are making moral choices we disagree with, who is to blame: the company, the engineer, the passenger, the government, or no one at all?

Thousands of years of philosophy have been devoted to questions about our moral code in the context of all-powerful beings that can make decisions—and needless to say, we don't have consensus.

Perhaps the most important overarching question is: who is making the decisions, and how do we track and assess those decisions to their logical ends?

Bias is another thorny issue. Recently, a University of Virginia computer science professor found that gender bias in training data was passed on to the AI trained on it. "Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them," according to Wired. "If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association." In a similar incident, Google Photos' AI labeled black people as gorillas.
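
To make that amplification concrete, here is a minimal Python sketch of the measurement: compare how strongly an activity is associated with a gender in the training labels versus in a model's predictions. The numbers and label pairs are hypothetical, chosen only to illustrate the idea, not taken from the study.

```python
def association_ratio(pairs, activity="cooking", gender="woman"):
    """Fraction of images showing `activity` whose person is labeled `gender`."""
    genders = [g for a, g in pairs if a == activity]
    return sum(1 for g in genders if g == gender) / len(genders)

# Hypothetical training labels: 66% of cooking images show women.
train_pairs = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34

# Hypothetical model predictions on similar images: the skew grows to 84%.
predicted_pairs = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

dataset_bias = association_ratio(train_pairs)    # 0.66
model_bias = association_ratio(predicted_pairs)  # 0.84

# A positive gap means the model didn't just mirror the bias; it amplified it.
print(f"dataset: {dataset_bias:.2f}  model: {model_bias:.2f}  "
      f"amplification: {model_bias - dataset_bias:+.2f}")
```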

At a company like Kinetica, premised on dynamic data analysis at sub-second speeds, we understand that data is nothing without analysis and associated action. And when a machine learning model is doing the analysis, it's important to remember that someone trained that model. AI is not a bias-free panacea, but the product of the data, and the biases, it was trained on.

Societal impact is the third significant area of concern. AI will touch societies all across the globe, but those societies are diverse, and we have to expect that the technology will be used differently from place to place.

Take facial recognition, for example. The EU may choose to regulate AI to limit its powerful facial recognition capabilities, in order to prevent deepfakes that convincingly show people doing or saying things they never did. But even assuming the EU could control the technology within its borders, if another country chooses to do nothing, deepfakes could be produced there, and people in the EU would still find them on the internet.

From sabotaging politicians to blackmailing celebrities to framing someone for a crime, this technology could be used with malicious intent. China may use facial recognition for Minority Report-style surveillance while Norway uses it to unify a citizen's digital identity across government services, and once the technology is out there, we are not necessarily the arbiters of how it will be used.

Singapore's Personal Data Protection Commission has begun to tackle this issue with its Model AI Governance Framework, a generalized, ready-to-use tool for deploying AI responsibly. It is intended to help organizations assess and manage the risks of AI deployments and to give companies a structured way to think through the implications of their AI technology.

To date, most of the attention has gone to training models; few have thought about governance. Organizations need to start establishing centralized libraries of trained models, recording which data set was used to train each one. There should be straightforward, accessible disclosures of the logic behind each model, visibility into where the model is deployed across the enterprise, and the ability to audit the lineage of the model’s decision-making.
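
As a sketch of what such a registry entry could look like, here is a minimal, hypothetical Python example. The ModelRecord structure, its field names, and the sample values are illustrative assumptions, not a reference to any particular product or standard; a production registry would be a durable, access-controlled service rather than an in-memory object.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    """One entry in a centralized library of trained models."""
    name: str              # model identifier in the central library
    version: str
    training_dataset: str  # which data set trained this version
    logic_summary: str     # plain-language disclosure of the model's logic
    deployments: list = field(default_factory=list)   # where the model runs
    decision_log: list = field(default_factory=list)  # lineage of its decisions

    def record_decision(self, inputs, output):
        """Append an auditable record of a single decision."""
        self.decision_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": inputs,
            "output": output,
        })

# Register a (hypothetical) model, note where it runs, and audit one decision.
model = ModelRecord(
    name="loan-approval",
    version="1.3.0",
    training_dataset="applications-2019-q4",
    logic_summary="Gradient-boosted trees over income, credit history, tenure",
    deployments=["web-portal", "branch-api"],
)
model.record_decision({"income": 52000, "tenure_years": 4}, "approved")
print(model.decision_log[-1]["output"])  # approved
```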

Because questions of ethics are rarely black and white, the desired outcome is not so much to judge the ethical decisions as to offer an explanation of how the model got from decision point A to point Z, making it easier for people to assess whether these outcomes map to the ethical stance they want to take. Such governance gives decision-makers a framework for thinking about ethics as they implement models, and then for tracking and evaluating the outcomes.
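
One way to provide that A-to-Z explanation is to have the system emit a trace of every step that led to its output. The toy rule-based decision below is a hypothetical stand-in for a real model; the point is the trace, which a reviewer can check against the ethical stance they want to take.

```python
def decide(applicant):
    """Return a decision plus a step-by-step trace of how it was reached."""
    trace = []
    if applicant["income"] < 30000:
        trace.append("income below 30000 -> risk flag raised")
        risky = True
    else:
        trace.append("income at or above 30000 -> no risk flag")
        risky = False
    if risky and applicant["tenure_years"] < 2:
        trace.append("risk flag plus short tenure -> declined")
        return "declined", trace
    trace.append("no disqualifying conditions -> approved")
    return "approved", trace

outcome, trace = decide({"income": 28000, "tenure_years": 5})
print(outcome)       # approved
for step in trace:   # each step explains one hop from point A toward point Z
    print(" -", step)
```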

The World Economic Forum is working on an initiative to accomplish this, and companies including Kinetica are participating to ensure that the new realm of machine learning-enabled data platforms has the infrastructure to support governance, transparency, and repeatability.

We think organizations like these are taking the right approach by building structure now, before we're stuck playing catch-up. AI is complex in its technology, its possibilities, and its dangers, and it's incumbent upon us to be cognizant of that and to formulate a thoughtful approach.

Mistakes will be made. But they shouldn't be thoughtless ones. We hope that more organizations and companies will take the essential first step outlined by the World Economic Forum: putting AI's moral, bias, and societal implications on the radar of boards, company leadership, and the technologists building it. This is the best way to put AI on a path to making life for individuals and societies better than it has ever been. The time is now.