
Facial Recognition Tech: 4 Things You Must Know

Updated June 18

On May 22 and June 4, Congress held hearings on facial recognition technology. The hearings examined the use of facial recognition technology by federal, state, and local government agencies, corporations, and social media companies. Specifically, lawmakers addressed privacy and civil rights concerns.


Click here to watch the Hearing on C-SPAN
  1. Facial recognition technology programs cannot accurately identify people of color and women; error rates for these groups are markedly higher than for white men (a brief sketch of how such disparities are measured follows this list).
  2. Facial recognition technology is not regulated by government.
  3. Law enforcement agencies and corporations use facial recognition technology without our consent.
  4. Facial recognition technology contributes to employment discrimination — against people over the age of 40, women and minorities — and to disparities in health insurance premiums.
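As a rough illustration only (the demographic groups and outcomes below are invented, not taken from the hearings or the studies cited in the testimony), the following sketch shows how researchers typically measure the accuracy disparities referenced above: run the system on a labeled test set and compute the error rate separately for each demographic group.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic group, was the match correct?)
# Groups and outcomes are invented for illustration.
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", False),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [errors, total attempts]
for group, correct in results:
    counts[group][0] += 0 if correct else 1
    counts[group][1] += 1

for group, (errors, total) in counts.items():
    print(f"{group}: error rate {errors / total:.0%} ({errors}/{total})")
```

A wide gap between the per-group error rates, rather than the overall average, is what the researchers who testified point to as evidence of bias.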

All information provided is verifiable: the videos and written testimony linked below, and the sources they cite, confirm these statements.

Facial Recognition Technology Resources

  1. Read Written Testimony: [United States House Committee on Oversight and Government Reform, Re: Facial Recognition Technology (Part 1) – Its Impact on our Civil Rights and Liberties by Joy Buolamwini, Founder, Algorithmic Justice League]
  2. Read Written Testimony: [United States House Committee on Oversight and Government Reform, Re: Facial Recognition Technology (Part 1) – Its Impact on our Civil Rights and Liberties by Professor Andrew Guthrie Ferguson, University of the District of Columbia, David A. Clarke School of Law]
  3. Read Written Testimony: [United States House Committee on Oversight and Government Reform, Re: Facial Recognition Technology (Part 1) – Its Impact on our Civil Rights and Liberties by Dr. Cedric Alexander, Former President, National Organization of Black Law Enforcement Executives]
  4. Read Written Testimony: [United States House Committee on Oversight and Government Reform, Re: Facial Recognition Technology (Part 1) – Its Impact on our Civil Rights and Liberties by Ms. Clare Garvie, Senior Associate, Georgetown University Law Center, Center on Privacy & Technology]
  5. Read Written Testimony: [United States House Committee on Oversight and Government Reform, Re: Facial Recognition Technology (Part 1) – Its Impact on our Civil Rights and Liberties by Ms. Neema Singh Guliani, Senior Legislative Counsel, American Civil Liberties Union]

Video Resources: Facial Recognition Tech

  1. Watch Video: [United States House Committee on Oversight and Government Reform hearing, Re: Facial Recognition Technology (Part 1) – Its Impact on our Civil Rights and Liberties, on YouTube]
  2. Watch Video on C-SPAN: [The House Oversight and Reform Committee held a hearing to examine the use of facial recognition technology (Part 2) by the government and commercial entities and its impact on civil rights and liberties. Witnesses discussed the flaws in the technology, including programs that could not accurately identify people of color and women. Other concerns raised were the lack of regulation and oversight in the technology, how law enforcement is using facial recognition, fears of racial profiling, and the privacy issues surrounding Facebook, Uber, and Amazon’s use of the technology]

About the Author

Olivia P. Walker is an award-winning public affairs and administration professional. She launched O.W.B Public Affairs and writes all site content. She previously consulted for the International Society for Pharmaceutical Engineering and served as a government affairs and public policy analyst at WellCare Health Plans. Olivia is a fusion belly dancer and a member of the American Society for Public Administration’s Section on Public Law and Administration.

Read More Tech Articles on O.W.B Public Affairs

[Watch Facebook, Google Executives Testify Live on the Rise of White Nationalism Across Social Media]

[Examining The Risks of Artificial Intelligence]

[Drug Pricing: Tech Start-Up Provides Solutions for Drug Manufacturers Subject to 340B Duplicate Discounts]


Examining the Risks of Artificial Intelligence

Technological advances have changed the way people work, consume information, and live, and those innovations gave impetus to artificial intelligence (AI). In recent years, AI has generated controversy. Proponents cite AI’s track record of improving operational efficiency, enhancing target identification efforts, and increasing productivity through automation and big data analytics. Conversely, opponents blame automation for fueling unemployment and rising inequality; some have even suggested AI is a threat to humanity. AI is not a threat to humanity; the algorithms, the methods programmers use to develop AI technology, and the government’s inability to regulate AI are the threats.


What is Artificial Intelligence?

Artificial intelligence refers to computational systems and programming models that enable machines to acquire and apply knowledge in ways once exclusive to humans.
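To make that definition concrete, here is a deliberately tiny sketch (the task, labels, and numbers are invented for illustration): the program "acquires knowledge" by storing labeled examples and "applies" it by classifying a new input it has never seen, which is the basic learn-then-apply pattern behind the machine-learning systems discussed in this article.

```python
# A tiny 1-nearest-neighbour classifier: "learning" is storing labeled
# examples; "applying knowledge" is labeling a new point by its closest example.
training_data = [
    # (hours of sunshine, rainfall in mm) -> weather label (invented numbers)
    ((9.0, 0.0), "clear"),
    ((7.5, 1.0), "clear"),
    ((2.0, 12.0), "rainy"),
    ((1.0, 20.0), "rainy"),
]

def predict(features):
    """Return the label of the training example closest to the new input."""
    def distance(example):
        (x, y), _ = example
        return (x - features[0]) ** 2 + (y - features[1]) ** 2
    _, label = min(training_data, key=distance)
    return label

print(predict((8.0, 0.5)))   # -> "clear"
print(predict((1.5, 15.0)))  # -> "rainy"
```

Real AI systems replace this toy lookup with statistical models trained on very large datasets, but the pattern of learning from examples and then applying that learning is the same.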

The Risks of Artificial Intelligence

AI is under-regulated. Accordingly, the threats to humanity are not AI itself but human bias, heuristic programming, algorithmic bias, and the government’s inability to keep pace with technological innovation, which is a prerequisite to meaningful legislation and regulation. AI is only as ‘intelligent’ and ethical as the programmers developing it and the government actors responsible for its oversight. In other words, the extent to which AI poses a threat to humanity depends upon human behavior during the research, development, and evaluation phases of AI technology.

AI is a risk for the poor, women, minorities, and job seekers.

Risk Mitigation and the Government’s Role 

While strategies to mitigate algorithmic bias are available, they are rarely used (a minimal sketch of one such monitoring check follows the list below). As reported by the MIT Technology Review, research indicates “that crucial stakeholders, including the companies that develop and apply machine learning systems and government regulators, show little interest in monitoring and limiting algorithmic bias.” This is problematic because:

  • Employers, courts, banks, insurance companies, immigration professionals, police officers, and educational institutions use AI technology;
  • In these instances, AI technology determines parole eligibility, creditworthiness, insurance premiums, and teacher quality, and informs hiring decisions;
  • AI is increasingly identified as prejudicial toward job seekers over the age of 40, people of color, and the poor.
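To show what "monitoring and limiting algorithmic bias" can look like in practice, here is a minimal, hypothetical audit (the decisions, group labels, and data are invented, not drawn from any cited study): compare a model's selection rates across demographic groups and flag any group whose rate falls below four-fifths of the highest rate, a rule of thumb long used in employment discrimination analysis.

```python
from collections import defaultdict

# Hypothetical audit: compare a hiring model's selection rates by group.
# records: (group label, model decision) pairs; 1 = selected, 0 = rejected.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, decision in records:
    totals[group] += 1
    selected[group] += decision

rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())

# Four-fifths rule: a group's selection rate below 80% of the highest
# group's rate is commonly treated as evidence of adverse impact.
for group, rate in rates.items():
    ratio = rate / best
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio vs. highest {ratio:.2f} -> {flag}")
```

Companies and regulators could run checks like this on the outputs of the hiring, credit, and parole tools listed above, both before and after deployment.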

Suggestions for Readers

First, I recommend you conduct a Google search on AI bias; you will find many ways in which AI technology might be harmful. Second, visit C-SPAN, where you can find and watch Congressional hearings focused on AI. To be clear, there are societal benefits associated with AI technology. Nevertheless, the risks of AI cannot be ignored.

Recommendations

Industry and government must work together to create:

  • Strategies to mitigate bias and assure integrity in AI development processes;
  • AI program monitoring and evaluation techniques; and
  • Meaningful and effective legislation and regulation (within reason).


Importantly, the extent to which AI poses a threat to humanity is dependent upon human behavior during the research, development, and evaluation phases of AI technology. To be sure, there are societal benefits associated with AI technology. Nevertheless, the risks of AI cannot be ignored. Industry and government must work together to mitigate the risks associated with AI.

About the Author

Olivia P. Walker is a public affairs strategist, campaign consultant, and writer. Most recently, Olivia served as a governance consultant for the International Society for Pharmaceutical Engineering and worked as a government affairs and public policy analyst for WellCare Health Plans, a Fortune 500 health insurer. Olivia holds a master’s degree in public administration from the University of South Florida School of Public Affairs. In 2016, Olivia was initiated into Pi Alpha Alpha, the Global Honor Society for Public Affairs and Administration. She is a member of the American Society for Public Administration and of the ASPA Section on Public Law and Administration. Olivia also holds a Graduate Certificate in Globalization Studies, a specialized graduate-level credential reflecting knowledge of the most up-to-date research on globalization.

