Examining the Risks of Artificial Intelligence

Facebook CEO Mark Zuckerberg testified before Congress last week in the wake of the Cambridge Analytica data scandal. His testimony warrants a closer examination of Artificial Intelligence.

Unidentified Risks Cannot Be Mitigated 

Defining Artificial Intelligence

Technological advances have changed the way people work, consume information, and live. These innovations gave impetus to the development of Artificial Intelligence (AI), which can be defined as computational systems and programming models that enable machines to acquire and apply knowledge in ways previously exclusive to humans.

The Risks of Artificial Intelligence

In recent years, AI has generated controversy. Proponents cite AI’s track record of improving target identification and marketing, operational efficiency, and productivity through automation and big-data analytics. Opponents blame automation for unemployment and rising inequality; some have even suggested AI is a threat to humanity.

Heuristic programming, algorithmic bias, and the government’s inability to keep pace with technological innovation (a prerequisite to providing meaningful oversight) are the threats to humanity, not AI itself. AI is only as ‘intelligent’ and ethical as the programmers who develop it and the government actors responsible for its oversight. In other words, the extent to which AI poses a threat to humanity depends on human behavior during the research, development, and evaluation of AI technology.

Risk Mitigation 

While strategies to mitigate algorithmic bias have been identified, they are rarely utilized. As reported in the MIT Technology Review, research indicates “that crucial stakeholders, including the companies that develop and apply machine learning systems and government regulators, show little interest in monitoring and limiting algorithmic bias.” Industry and government must work together to identify and mitigate the risks associated with AI, and prioritize the development of:

  • Strategies to mitigate bias and assure integrity in AI development processes;
  • AI program monitoring and evaluation techniques (see the sketch after this list); and
  • Meaningful and effective legislation and regulation (within reason).
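
To make the first two bullets concrete, here is a minimal sketch of one monitoring technique a development team could apply: computing a disparate impact ratio over an audit log of model decisions. The audit data, the group labels, and the 0.8 threshold (the commonly cited “four-fifths rule”) are illustrative assumptions, not a prescription for any particular system.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group, from (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group approval rate to the highest.
    Values below ~0.8 (the "four-fifths rule") are a common red flag."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical audit log: (group label, did the model approve?)
    audit_log = ([("A", True)] * 80 + [("A", False)] * 20
                 + [("B", True)] * 40 + [("B", False)] * 60)
    ratio, rates = disparate_impact_ratio(audit_log)
    print(f"Selection rates: {rates}")             # A: 0.80, B: 0.40
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 -> flag for human review
```

A single ratio like this cannot certify a system as fair, but routine checks of this kind are exactly the sort of monitoring and evaluation the list above calls for.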


Click here to watch Zuckerberg testify before Congress.


[This is the first article of a multi-article series]