Suppose you were shown that an artificial intelligence tool offers accurate predictions about some stocks you own. How would you feel about using it? Now suppose you are applying for a job at a company whose HR department uses an AI system to screen resumes. Would you be comfortable with that?
A new study finds that people are neither uniformly enthusiastic about AI nor uniformly averse to it. Rather than falling into camps of techno-optimists and Luddites, people are discerning about the practical upshot of using AI, case by case.
“We propose that AI appreciation occurs when AI is perceived as being more capable than humans, and personalization is perceived as being unnecessary in a given decision context,” says Jackson Lu, co-author of a newly published paper detailing the study’s results. “AI aversion occurs when either of these conditions is not met, and AI appreciation occurs only when both conditions are satisfied.”
The paper, “AI Aversion or Appreciation? A Capability–Personalization Framework and a Meta-Analytic Review,” appears in Psychological Bulletin. The paper has eight co-authors, including Lu, who is the Career Development Professor and an associate professor of work and organization studies at the MIT Sloan School of Management.
A new framework adds insight
People’s reactions to AI have long been the subject of extensive debate, often producing seemingly disparate findings. An influential 2015 paper on “algorithm aversion” found that people are less forgiving of AI-generated errors than of human errors, whereas a widely noted 2019 paper on “algorithm appreciation” found that people preferred advice from AI over advice from humans.
To reconcile these mixed findings, Lu and his co-authors conducted a meta-analysis of 163 prior studies that compared people’s preferences for AI versus humans. The researchers tested whether the data supported their proposed “capability–personalization framework,” the idea that, in a given context, both the perceived capability of AI and the perceived need for personalization shape our preferences for AI or for humans.
Across the 163 studies, the research team analyzed more than 82,000 responses to 93 distinct “decision contexts,” for instance, whether or not participants felt comfortable with AI being used in cancer diagnoses. The analysis confirmed that the capability–personalization framework does indeed help account for people’s preferences.
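As a rough illustration of what meta-analytic aggregation involves, the sketch below pools per-study effect sizes with inverse-variance weighting, a standard fixed-effect approach. The numbers are hypothetical and are not the study's data; the paper's actual methodology may differ.

```python
# Minimal sketch of inverse-variance (fixed-effect) meta-analytic pooling.
# Effect sizes and variances below are illustrative, NOT from the paper.

def pool_effects(effects, variances):
    """Combine per-study effect sizes via inverse-variance weighting."""
    weights = [1.0 / v for v in variances]                       # precision of each study
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)                              # variance of the pooled estimate
    return pooled, pooled_var

# Hypothetical standardized effects (positive = preference for AI) and their variances
effects = [0.30, -0.12, 0.45, 0.05]
variances = [0.02, 0.05, 0.03, 0.04]

pooled, pooled_var = pool_effects(effects, variances)
print(f"pooled effect = {pooled:.3f}, SE = {pooled_var ** 0.5:.3f}")
```

More precise studies (smaller variances) pull the pooled estimate toward their own effect sizes, which is why a meta-analysis can resolve seemingly contradictory individual studies.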
“The meta-analysis supported our theoretical framework,” Lu says. “Both dimensions matter: Individuals evaluate whether AI is more capable than humans at a given task, and whether the task calls for personalization. People prefer AI only if they think AI is more capable than humans and the task is nonpersonal.”
He adds: “The key idea here is that high perceived capability alone does not guarantee AI appreciation. Personalization matters too.”
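The framework's two-condition logic can be sketched as a simple 2x2. The function name and the example contexts below are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of the capability-personalization framework as a 2x2:
# AI appreciation is predicted only when AI seems more capable AND the
# task is seen as impersonal; otherwise the prediction is aversion.

def predicted_attitude(ai_seen_as_more_capable: bool, task_is_personal: bool) -> str:
    if ai_seen_as_more_capable and not task_is_personal:
        return "AI appreciation"
    return "AI aversion"

# Illustrative mapping of decision contexts onto the two dimensions
contexts = {
    "fraud detection": (True, False),  # AI seen as more capable, task impersonal
    "therapy": (False, True),          # humans seen as more capable, task personal
    "job interview": (True, True),     # capable, but felt to require personalization
}
for task, (capable, personal) in contexts.items():
    print(task, "->", predicted_attitude(capable, personal))
```

The point of the sketch is that the two dimensions combine conjunctively: flipping either one is enough to turn predicted appreciation into aversion.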
For example, people favor AI when it comes to detecting fraud or sorting large datasets, areas where AI’s abilities exceed those of humans and personalization is not required. But they are more resistant to AI in contexts such as therapy, job interviews, or medical diagnoses, where they feel a human is better able to recognize their unique circumstances.
“People have a fundamental desire to see themselves as unique and distinct from others,” Lu says. “AI is often viewed as impersonal and operating in a rote manner. Even if the AI is trained on a wealth of data, people feel it cannot grasp their personal situations. They want a human recruiter, a human doctor, who can see them as distinct from others.”
Context also matters: from tangibility to unemployment
Other factors also influence individuals’ preferences for AI. For example, AI appreciation is more pronounced for tangible robots than for intangible algorithms.
Economic context matters as well. In countries with lower unemployment, AI appreciation is more pronounced.
“That makes intuitive sense,” Lu says. “If you are worried about being replaced by AI, you are less likely to embrace it.”
Lu continues to study people’s complex and evolving attitudes toward AI. While he does not view the current meta-analysis as the final word on the matter, he hopes the capability–personalization framework offers a valuable lens for understanding how people evaluate AI across different contexts.
In addition to Lu, the paper’s co-authors are Xin Qin, Chen Chen, Hansen Zhou, and Xiaowei Dong of Sun Yat-sen University; Xiang Zhou of Shenzhen University; and Dongyuan Wu of Fudan University.
The research was supported, in part, by grants to Qin and Wu from the National Natural Science Foundation of China.