When Self-Humanization Leads to Algorithm Aversion: What Users Want from Decision Support Systems on Prosocial Microlending Platforms
Pascal Oliver Heßler, Jella Pfeiffer, Sebastian Hafenbrädl
This study investigates why people often reject algorithmic advice, specifically focusing on prosocial (e.g., charitable) versus for-profit decisions on microlending platforms. Using an online experiment, the research examines how the decision-making context affects users' aversion to algorithms and their preference for more human-like decision support systems.
Problem
While algorithmic decision support systems are powerful tools, many users are averse to using them in certain situations, which reduces their adoption and effectiveness. This study addresses the gap in understanding why this 'algorithm aversion' occurs by exploring how the desire to feel human in prosocial contexts, where empathy and autonomy are valued, influences user preferences for decision support.
Outcome
- In prosocial contexts, like charitable microlending, people place a higher importance on human-like attributes such as empathy and autonomy compared to for-profit contexts.
- This increased focus on empathy and autonomy leads to a greater aversion to using computer-based algorithms for decision support.
- Users who are more averse to algorithms show a stronger preference for decision support systems that seem more human-like.
- Consequently, users on prosocial platforms prefer more human-like decision support than users on for-profit platforms, suggesting that systems should be designed differently depending on their purpose.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.

Host: Today, we’re diving into a fascinating question: why do we sometimes resist help from A.I., even when it’s designed to make our lives easier? We’ll be exploring a study titled "When Self-Humanization Leads to Algorithm Aversion: What Users Want from Decision Support Systems on Prosocial Microlending Platforms."

Host: In short, the study looks at why people often reject A.I. advice, particularly when making charitable decisions versus for-profit ones. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.

Expert: Great to be here, Anna.

Host: Alex, companies are pouring billions into A.I. decision support systems. What's the big, real-world problem this study is tackling?

Expert: The problem is that despite how powerful these systems are, user adoption is often surprisingly low. There's a well-documented phenomenon called 'algorithm aversion', where people simply prefer human advice over an algorithm's, even when the algorithm is more accurate.

Host: So we’re building these amazing tools, but people aren’t using them?

Expert: Exactly. And this study digs into a key reason why. It's not just about a lack of trust in A.I.; it’s about our own psychology. The researchers propose that in certain situations, we have a deep-seated need to see ourselves as fully human, a concept they call 'self-humanization'.

Host: Self-humanization. Tell us more about that.

Expert: It’s the idea that we value uniquely human traits like empathy, emotional responsiveness, and the freedom to choose, which the study calls autonomy. When we're making a decision that feels deeply personal or moral, like donating to a charity, we want to exercise those human muscles. We don't see algorithms as having empathy, so we push them away.

Host: That’s a powerful idea. So how did the researchers actually test this?
Expert: They ran a clever online experiment. They created two simulated microlending platforms and randomly assigned participants to one of them. One platform was 'prosocial', where you lend money to entrepreneurs in need, like a charity, with no interest. The other was 'for-profit', where the goal was to earn money on your loan. The core decision was the same (who to lend money to), but the context was completely different.

Host: Prosocial versus for-profit. I can already see how my mindset would shift. What were the key findings from this experiment?

Expert: The findings were very clear and supported their theory. First, in the prosocial, or charitable, context, people placed a much higher importance on empathy and their own autonomy in making the decision.

Host: So when we're giving to a cause, we want to feel that connection and be in the driver's seat, emotionally.

Expert: Precisely. And that directly led to the second finding: this focus on empathy and autonomy created a much higher aversion to using an algorithm for advice. People in the charitable setting were more likely to reject A.I. help.

Host: What did that mean for the kind of support they actually wanted?

Expert: That’s the third key finding. The more averse a person was to a standard algorithm, the more they preferred a decision support system that seemed human-like. When required to use A.I., they wanted one that could act more like a person.

Host: Which brings it all together, I imagine.

Expert: Yes. The final outcome was that users on the charitable platform had a significantly stronger preference for human-like A.I. assistants than users on the for-profit platform. It shows that the context of the decision dramatically changes what we want from our technology.

Host: This is where it gets really interesting for our listeners. Alex, what are the crucial business takeaways here? What should leaders be thinking about?

Expert: The biggest takeaway is that context is king.
You cannot build a one-size-fits-all A.I. assistant. The design of your A.I. must match the user's motivation.

Host: So a tool for a non-profit should look and feel different from a tool for a financial firm.

Expert: Absolutely. For any platform with a prosocial mission, such as charities, crowdfunding for a cause, or corporate volunteering platforms, the A.I. needs to be humanized. This isn’t just about a friendly avatar. It means using natural language, showing warmth, and acknowledging the user's autonomy. This is the remedy for algorithm aversion in these contexts.

Host: And for the for-profit world?

Expert: There, the user's desire to feel human is less pronounced. The motivation is profit. So the A.I.'s design should likely focus more on what we traditionally expect: performance, data, speed, and accuracy. Empathy is less of a factor. It highlights that A.I. adoption isn't just a tech challenge; it’s a human psychology and user experience challenge.

Host: So, to wrap up, it seems the secret to getting people to embrace A.I. is to understand their underlying goal. If the task is about helping others, the A.I. needs to feel more like a partner than a machine.

Expert: That's the core message. Match the A.I.'s perceived personality to the user's purpose, and you’ll bridge the gap between human nature and machine intelligence.

Host: A powerful insight for any business deploying A.I. today. Alex Ian Sutherland, thank you so much for breaking that down for us.

Expert: My pleasure, Anna.

Host: And thank you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the future of business and technology.