
How do we build trust in machine learning models?

Author: Ernesto Lee

Publication date: 2021/4/5

Journal: Available at SSRN 3822437

Description:

Artificial intelligence (AI) systems and machine learning algorithms are rapidly being adopted in both the private and public sectors to streamline basic and complex decision-making processes. Most economic sectors, including transportation, retail, advertising, and electricity, are being disrupted by large-scale data digitization and by the emerging technologies that build on it. Computerized systems are being deployed to increase precision and objectivity in government operations, and AI is beginning to shape democracy and governance.

The availability of large data sets has made it simple for computers to extract new insights, and as a result algorithms have evolved into ever more complex and ubiquitous methods for automated decision-making. An algorithm is a series of step-by-step instructions that a computer follows to complete a task. In the pre-algorithm era, hiring, advertising, criminal sentencing, and lending decisions were made by humans and organizations, and these decisions were often regulated by federal, state, and local laws that set standards for justice, openness, and equality in decision-making (Lee, Resnick, & Barton, 2019). Today, some of these decisions are made entirely by, or heavily influenced by, machines whose scale and statistical rigor promise previously unattainable efficiencies. Algorithms draw on large volumes of macro- and micro-data to influence decisions affecting people in a wide range of activities, from movie recommendations to assisting banks in determining a person’s creditworthiness. Algorithms in supervised machine learning depend on multiple data sets, or training data, that specify the correct outputs …
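As a rough illustration of that last point, the following minimal Python sketch (using scikit-learn) shows how training data that pair inputs with known-correct outputs are used to fit a supervised model, in the spirit of the creditworthiness example above. The feature names, numbers, and model choice are invented for illustration and are not taken from the paper.

from sklearn.linear_model import LogisticRegression

# Training data: each row pairs inputs (income in $1,000s, debt-to-income
# ratio) with a known-correct output below (1 = repaid, 0 = defaulted).
# All values are made up for this sketch.
X_train = [
    [52.0, 0.20],
    [34.0, 0.55],
    [78.0, 0.10],
    [29.0, 0.65],
    [61.0, 0.30],
    [41.0, 0.50],
]
y_train = [1, 0, 1, 0, 1, 0]  # the "correct outputs" the model learns from

# Fit the model: it infers a decision rule from the labeled examples.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Apply the learned rule to an applicant it has never seen.
new_applicant = [[45.0, 0.40]]
print(model.predict(new_applicant))        # predicted label (1 or 0)
print(model.predict_proba(new_applicant))  # estimated class probabilities

The labeled examples play exactly the role the abstract describes: they specify the correct outputs from which the algorithm generalizes to new cases, and whether such learned rules deserve the trust once placed in human decision-makers is the question the paper raises.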

 
