Balancing fear and confidence: A strategic approach to mitigating human risk in cybersecurity
Dennis F. Galletta, Gregory D. Moody, Paul Benjamin Lowry, Robert Willison, Scott Boss, Yan Chen, Xin “Robert” Luo, Daniel Pienta, Peter Polak, Sebastian Schuetze, and Jason Thatcher
This study explores how to improve cybersecurity by focusing on the human element. Based on interviews with C-level executives and prior experimental research, the paper proposes a strategy for communicating cyber threats that balances making employees aware of the dangers (fear) with building their confidence (efficacy) to handle those threats effectively.
Problem
Despite advanced security technology, costly data breaches continue to rise because human error remains the weakest link. Traditional cybersecurity training and policies have proven ineffective, indicating a need for a new strategic approach to manage human risk.
Outcome
- Human behavior is the primary vulnerability in cybersecurity, and conventional training programs are often insufficient to address this risk.
- Managers must strike a careful balance in their security communications: instilling a healthy awareness of threats ('survival fear') without causing excessive panic or anxiety, which can be counterproductive.
- Building employees' confidence ('efficacy') in their ability to identify and respond to threats is just as crucial as making them aware of the dangers.
- Effective tools for changing behavior include interactive methods like phishing simulations that provide immediate feedback, gamification, and fostering a culture where security is a shared responsibility.
- The most effective approach is to empower users by providing them with clear, simple tools and the knowledge to act, rather than simply punishing mistakes or overwhelming them with fear.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we’re looking at a critical issue that costs businesses billions: cybersecurity. But we're not talking about firewalls and encryption; we’re talking about people.
Host: We're diving into a fascinating new study titled "Balancing fear and confidence: A strategic approach to mitigating human risk in cybersecurity." It proposes a new strategy for communicating cyber threats, one that balances making employees aware of dangers with building their confidence to handle them.
Host: Here to break it down for us is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, Alex, let's start with the big picture. We invest so much in security technology, yet we keep hearing about massive, costly data breaches. What's the core problem this study addresses?
Expert: The core problem is that despite all our advanced tech, the human element remains the weakest link. The study highlights that data breaches are not only increasing, they’re getting more expensive, averaging nearly 9.5 million dollars per incident in 2023.
Host: Nine and a half million dollars. That’s staggering.
Expert: It is. And the research points out that about 90% of all data breaches result from internal causes like simple employee error or negligence. So, the traditional approach—annual training videos and dense policy documents—clearly isn't working. We need a strategic shift.
Host: So how did the researchers approach this? It sounds like a complex human problem.
Expert: It is, and they took a very practical approach. They combined findings from their own prior experiments on how people react to threats with a series of in-depth interviews. They spoke directly with ten C-level executives—CISOs and CIOs—from major companies in healthcare, retail, and manufacturing.
Host: So, this isn't just theory. They went looking for a reality check from leaders on the front lines.
Expert: Exactly. They wanted to know what actually works in the real world when it comes to motivating employees to be more secure.
Host: Let’s get to their findings. What was the most significant discovery?
Expert: The biggest takeaway is the need for a delicate balance. Managers need to instill what the study calls a healthy 'survival fear'—an awareness of real threats—without causing panic or anxiety, which just makes people shut down.
Host: 'Survival fear' is an interesting term. Can you explain that a bit more?
Expert: Think of it like teaching a child not to touch a hot stove. You want them to have a healthy respect for the danger, not to be terrified of the kitchen. One executive described it as an "inverted U" relationship: too little fear leads to complacency, but too much leads to paralysis where employees are too scared to do their jobs.
Host: So you make them aware of the threat, but then what? You can’t just leave them feeling anxious.
Expert: And that’s the other half of the equation: building their confidence, or what the study calls 'efficacy.' It’s just as crucial to empower employees with the belief that they can actually identify and respond to a threat. Fear gets their attention, but confidence is what drives the right action.
Host: What did the study find were the most effective tools for building that confidence?
Expert: The executives universally praised interactive methods over passive ones. The most effective tool by far was phishing simulations. These are fake phishing emails sent to employees. When someone clicks, they get immediate, private feedback explaining what they missed. It's a safe way to learn from mistakes.
Host: It sounds much more engaging than a PowerPoint presentation.
Expert: Absolutely. Gamification, like leaderboards for spotting threats, also works well. The key is moving away from a culture of punishment and toward a culture of shared responsibility, where reporting a suspicious email is seen as a positive, helpful action.
Host: This is the critical part for our listeners. Alex, what are the practical takeaways for a business leader who wants to strengthen their company's human firewall?
Expert: There are three key actions. First, reframe your communication. Stop leading with fear and punishment. Instead, focus on empowerment. The goal is to instill that healthy ‘survival fear’ about the consequences, but immediately follow it with simple, clear actions employees can take to protect themselves and the company.
Host: So, it's not "don't do this," but "here's how you can be a hero."
Expert: Precisely. The second takeaway is to make security easy. The executives pointed to the success of simple tools, like a "report this email" button that takes just one click. If security is inconvenient, people will find ways around it. Remove the friction from doing the right thing.
Host: And the third action?
Expert: Make your training relevant and continuous. Ditch the generic, annual "check-the-box" training that employees just play in the background. Use those phishing simulations, create short, engaging content, and tailor it to different teams. The threats are constantly evolving, so your training has to as well.
Host: So, to summarize, it seems the old model of just telling employees the rules is broken.
Host: The new approach is a delicate balance: make people aware of the risks, but immediately empower them with the confidence and the simple tools they need to become an active line of defense. It's about culture, not just controls.
Host: Alex, this has been incredibly insightful. Thank you for making this complex topic so clear.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we translate another key piece of research into actionable business strategy.
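The phishing-simulation loop Alex describes (a simulated lure, immediate private coaching after a click, and positive reinforcement for reporting) can be pictured with a short sketch. The following is a minimal, hypothetical illustration in Python; the names (SimulatedPhish, record_click, record_report) are invented for this example and are not taken from the study or from any real phishing-simulation platform.

```python
# Illustrative sketch only: a simulated phishing email is "sent", a click is
# recorded, and the employee immediately receives private, educational feedback
# rather than punishment. Reporting the email is rewarded instead.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class SimulatedPhish:
    """One simulated phishing email and the red flags it contains."""
    campaign_id: str
    subject: str
    red_flags: list[str]                                # e.g. spoofed sender, urgent tone
    clicks: dict[str, datetime] = field(default_factory=dict)
    reports: dict[str, datetime] = field(default_factory=dict)

    def record_click(self, employee_id: str) -> str:
        """Log the click and return immediate, private coaching feedback."""
        self.clicks[employee_id] = datetime.now()
        flags = "\n".join(f"  - {flag}" for flag in self.red_flags)
        return (
            "This was a simulation, so no harm done.\n"
            f"Red flags to look for next time:\n{flags}\n"
            "Tip: use the one-click 'report this email' button when unsure."
        )

    def record_report(self, employee_id: str) -> str:
        """Reward the desired behaviour: reporting a suspicious email."""
        self.reports[employee_id] = datetime.now()
        return "Thanks for reporting. You just helped protect the company."


if __name__ == "__main__":
    phish = SimulatedPhish(
        campaign_id="q3-invoice-lure",
        subject="URGENT: unpaid invoice attached",
        red_flags=["sender domain does not match the vendor",
                   "pressure to act immediately",
                   "unexpected attachment"],
    )
    print(phish.record_click("employee-017"))   # immediate private feedback
    print(phish.record_report("employee-042"))  # positive reinforcement
```

The point of the sketch is the design choice the executives emphasize: the response to a click is coaching rather than punishment, and reporting is treated as a positive, helpful action.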
Cybersecurity, Human Risk, Fear Appeals, Security Awareness, User Actions, Management Interventions, Data Breaches
Design Knowledge for Virtual Learning Companions from a Value-centered Perspective
Ricarda Schlimbach, Bijan Khosrawi-Rad, Tim C. Lange, Timo Strohmann, Susanne Robra-Bissantz
This study develops design principles for Virtual Learning Companions (VLCs), which are AI-powered chatbots designed to help students with motivation and time management. Using a design science research approach, the authors conducted interviews and workshops, and built and tested several prototypes with students. The research aims to create a framework for designing VLCs that not only provide functional support but also build a supportive, companion-like relationship with the learner.
Problem
Working students in higher education often struggle to balance their studies with their jobs, leading to challenges with motivation and time management. While conversational AI like ChatGPT is becoming common, these tools often lack the element of companionship and a holistic approach to learning support. This research addresses the gap in how to design AI learning tools that effectively integrate motivation, time management, and relationship-building from a user-value-centered perspective.
Outcome
- The study produced a comprehensive framework for designing Virtual Learning Companions (VLCs), resulting in 9 design principles, 28 meta-requirements, and 33 design features.
- The findings are structured around a “value-in-interaction” model, which proposes that a VLC's value is created across three interconnected layers: the Relationship Layer, the Matching Layer, and the Service Layer.
- Key design principles include creating a human-like and adaptive companion, enabling proactive and reactive behavior, building a trustworthy relationship, providing supportive content, and fostering a motivational and ethical learning environment.
- Evaluation of a coded prototype revealed that different student groups have different preferences, emphasizing that VLCs must be adaptable to their specific educational context and user needs to be effective.
Host: Welcome to A.I.S. Insights, the podcast where we connect academic research to real-world business strategy, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re exploring a topic that’s becoming increasingly relevant in our AI-driven world: how to make our digital tools not just smarter, but more supportive. We’re diving into a study titled "Design Knowledge for Virtual Learning Companions from a Value-centered Perspective".
Host: In simple terms, it's about creating AI-powered chatbots that act as true companions, helping students with the very human challenges of motivation and time management. Here to break it all down for us is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna. It’s a fascinating study with huge implications.
Host: Let's start with the big picture. What is the real-world problem that this study is trying to solve?
Expert: Well, think about anyone trying to learn something new while juggling a job and a personal life. It could be a university student working part-time or an employee trying to upskill. The biggest hurdles often aren't the course materials themselves, but staying motivated and managing time effectively.
Host: That’s a struggle many of our listeners can probably relate to.
Expert: Exactly. And while we have powerful AI tools like ChatGPT that can answer questions, they function like a know-it-all tutor. They provide information, but they don't provide companionship. They don't check in on you, encourage you when you're struggling, or help you plan your week. This study addresses that gap.
Host: So it's about making AI more of a partner than just a tool. How did the researchers go about figuring out how to build something like that?
Expert: They used a very hands-on approach called design science research. Instead of just theorizing, they went through multiple cycles of building and testing. They started by conducting in-depth interviews with working students to understand their real needs. Then, they held workshops, designed a couple of conceptual prototypes, and eventually built and coded a fully functional AI companion that they tested with different student groups.
Host: So it’s a methodology that’s really grounded in user feedback. What were the key findings? What did they learn from all this?
Expert: The main outcome is a powerful framework for designing these Virtual Learning Companions, or VLCs. The big idea is that the companion's value is created through the interaction itself, which they break down into three distinct but connected layers.
Host: Three layers. Can you walk us through them?
Expert: Of course. First is the Relationship Layer. This is all about creating a human-like, trustworthy companion. The AI should be able to show empathy, maybe use a bit of humor, and build a sense of connection with the user over time. It’s the foundation.
Host: Okay, so it’s about the personality and the bond. What's next?
Expert: The second is the Matching Layer. This is about adaptation and personalization. The study found that a one-size-fits-all approach fails. The VLC needs to adapt to the user's individual learning style, their personality, and even their current mood or context.
Host: And the third layer?
Expert: That's the Service Layer. This is where the more functional support comes in. It includes features for time management, like creating to-do lists and setting reminders, as well as providing supportive learning content and creating a motivational environment, perhaps with gentle nudges or rewards.
Host: This all sounds great in theory, but did they see it work in practice?
Expert: They did, and they also uncovered a critical insight. When they tested their prototype, they found that full-time university students thought the AI’s language was too informal and colloquial. But a group of working professionals in a continuing education program found the exact same AI to be too formal!
Host: Wow, that’s a direct confirmation of what you said about the Matching Layer. The companion has to be adaptable.
Expert: Precisely. It proves that to be effective, these tools must be tailored to their specific audience and context.
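To make the three layers a little more concrete, here is a small, hypothetical sketch of how a VLC could compose them in a single reply, including the audience-based adjustment of tone that the evaluation surfaced. The class and method names (RelationshipLayer, MatchingLayer, ServiceLayer, VirtualLearningCompanion) are invented for illustration and do not come from the study's prototype.

```python
# Toy mock-up of the value-in-interaction layers described in the study.
# Names and logic are illustrative only; this is not the study's implementation.
from dataclasses import dataclass, field


@dataclass
class LearnerProfile:
    name: str
    audience: str                 # "full-time student" or "working professional"
    mood: str = "neutral"
    todos: list[str] = field(default_factory=list)


class RelationshipLayer:
    """Human-like, trust-building tone: empathy and encouragement."""
    def open_message(self, profile: LearnerProfile) -> str:
        if profile.mood == "stressed":
            return f"Hi {profile.name}, busy week? Let's keep today's step small."
        return f"Hi {profile.name}, good to see you again!"


class MatchingLayer:
    """Adapts register to the audience (the informal/formal finding)."""
    def adapt(self, text: str, profile: LearnerProfile) -> str:
        if profile.audience == "working professional":
            return text.replace("Hi", "Hello").replace("!", ".")
        return text  # keep the casual tone for full-time students


class ServiceLayer:
    """Functional support: reminders, to-dos, small motivational nudges."""
    def next_action(self, profile: LearnerProfile) -> str:
        if profile.todos:
            return f"Next on your list: {profile.todos[0]}"
        return "Nothing scheduled yet. Want to plan your study block for tomorrow?"


class VirtualLearningCompanion:
    """Composes the three layers into one reply per interaction."""
    def __init__(self):
        self.relationship = RelationshipLayer()
        self.matching = MatchingLayer()
        self.service = ServiceLayer()

    def respond(self, profile: LearnerProfile) -> str:
        greeting = self.relationship.open_message(profile)
        action = self.service.next_action(profile)
        return self.matching.adapt(f"{greeting} {action}", profile)


if __name__ == "__main__":
    vlc = VirtualLearningCompanion()
    student = LearnerProfile("Mia", "full-time student", todos=["Read chapter 3"])
    pro = LearnerProfile("Jon", "working professional", mood="stressed")
    print(vlc.respond(student))
    print(vlc.respond(pro))
```

In a real companion, the Matching Layer would draw on a richer user model (learning style, personality, mood) rather than simple string substitution, but the layering of relationship, matching, and service support is the same idea.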
Host: Alex, this is the crucial part for our audience. Why does this matter for business? What are the practical takeaways?
Expert: The implications are huge, Anna, and they go way beyond the classroom. Think about corporate training and HR. Imagine a new employee getting an AI companion that doesn't just teach them software systems, but helps them manage the stress of their first month and checks in on their progress and motivation. That could have a massive impact on engagement and retention.
Host: I can see that. It’s a much more holistic approach to onboarding. Where else?
Expert: For any EdTech company, this framework is a blueprint for building more effective and engaging products. It's about moving from simple content delivery to creating a supportive learning ecosystem. But you can also apply these principles to customer-facing bots. An AI that can build a relationship and adapt to a customer's technical skill or frustration level will provide far better service and build long-term loyalty.
Host: So the key business takeaway is to shift our thinking.
Expert: Exactly. The value of AI in these roles isn't just in the functional task it completes, but in the supportive, adaptive relationship it builds with the user. It’s the difference between an automated tool and a true digital partner.
Host: A fantastic insight. So, to summarize: today's professionals face real challenges with motivation and time management. This study gives us a three-layer framework—Relationship, Matching, and Service—to build AI companions that truly help. For businesses, this opens up new possibilities in corporate training, EdTech, and even customer relations.
Host: Alex, thank you so much for translating this complex study into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in. This has been A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more valuable knowledge for your business.
Conversational Agent, Education, Virtual Learning Companion, Design Knowledge, Value
Regulating emerging technologies: Prospective sensemaking through abstraction and elaboration
Stefan Seidel, Christoph J. Frick, Jan vom Brocke
This study examines how various actors, including legal experts, government officials, and industry leaders, collaborated to create laws for new technologies like blockchain. Through a case study in Liechtenstein, it analyzes the process of developing a law on "trustworthy technology," focusing on how the participants collectively made sense of a complex and evolving subject to construct a new regulatory framework.
Problem
Governments face a significant challenge in regulating emerging digital technologies. They must create rules that prevent harmful effects and protect users without stifling innovation. This is particularly difficult when the full potential and risks of a new technology are not yet clear, creating regulatory gaps and uncertainty for businesses.
Outcome
- Creating effective regulation for new technologies is a process of 'collective prospective sensemaking,' where diverse stakeholders build a shared understanding over time.
- This process relies on two interrelated activities: 'abstraction' and 'elaboration'. Abstraction involves generalizing the essential properties of a technology to create flexible, technology-neutral rules that encourage innovation.
- Elaboration involves specifying details and requirements to provide legal certainty and protect users.
- Through this process, the regulatory target can evolve significantly, as seen in the case study's shift from regulating 'blockchain/cryptocurrency' to a broader, more durable law for the 'token economy' and 'trustworthy technology'.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: On today’s episode, we're diving into the complex world of regulation for new technologies. We’re looking at a study titled "Regulating emerging technologies: Prospective sensemaking through abstraction and elaboration".
Host: The study examines how a diverse group of people—legal experts, government officials, and industry leaders—came together to create laws for a new technology, using blockchain in Liechtenstein as a case study. Here to help us unpack this is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: So Alex, let’s start with the big picture. What is the fundamental problem that governments and businesses face when a new technology like blockchain or A.I. emerges?
Expert: It’s a classic case of trying to build the plane while you're flying it. Governments need to create rules to protect users and prevent harm, but they also want to avoid crushing innovation before it even gets off the ground.
Host: The dreaded innovation killer.
Expert: Exactly. The study highlights that this is incredibly difficult when no one fully understands the technology's potential or its risks. This creates what the authors call a "regulatory gap"—a gray area of uncertainty that can paralyze businesses. They don't know if their new business model is legal, so they hesitate to invest.
Host: And how did the researchers in this study go about understanding this process? What was their approach?
Expert: They conducted an in-depth case study in the European state of Liechtenstein. They essentially got a front-row seat to the entire law-making process for blockchain technology.
Expert: They interviewed everyone involved—from the Prime Minister to tech startup CEOs to the financial regulators. They also analyzed hundreds of documents, including early strategy papers and evolving drafts of the law, to see how the thinking changed over time.
Host: It sounds like they had incredible access. So, after all that observation, what were the key findings? What did they discover about how to create good regulation?
Expert: The biggest finding is that it's a process of what they call 'collective prospective sensemaking'. That’s a fancy term for getting a diverse group of people in a room to build a shared vision of the future. It’s not about one person having the answer; it’s about creating it together.
Host: And the study found this process hinges on two specific activities: 'abstraction' and 'elaboration'. Can you break those down for us?
Expert: Of course. Think of 'abstraction' as zooming out. Initially, the group in Liechtenstein was focused on regulating "blockchain" and "cryptocurrency." But they realized that was too specific and would be outdated quickly.
Expert: So, they abstracted. They asked, "What is the essential quality of this technology?" They landed on the idea of "trust." This allowed them to create a flexible, technology-neutral rule for any "trustworthy technology," not just blockchain. It future-proofed the law.
Host: That’s a brilliant shift. So what about 'elaboration'?
Expert: If abstraction is zooming out, 'elaboration' is zooming in. Once they had the big, abstract concept—trustworthy technology—they had to add the specific details.
Expert: This meant defining roles, specifying requirements for service providers, and creating rules that would give businesses legal certainty and actually protect users. It's the process of giving the abstract idea real-world teeth.
Host: So the target itself evolved dramatically through this process.
Expert: It really did. They went from a narrow law about cryptocurrency to a broad, durable framework for what they called the "token economy." This was only possible because of that constant dance between the big-picture abstraction and the fine-detail elaboration.
Host: This is fascinating, Alex, but let's get to the bottom line. Why does this study matter for business leaders listening right now, even if they aren't in the crypto space?
Expert: This is the most crucial part. The study offers a powerful blueprint for how businesses should approach regulation for any emerging technology, whether it's A.I., quantum computing, or synthetic biology.
Expert: The first takeaway is proactive engagement. Don't wait for regulation to happen *to* you. The industry leaders in this study who participated in the process helped shape a more innovation-friendly law. By being at the table, you can influence the outcome.
Host: So get involved early and often. What else?
Expert: Second, understand the power of language. The breakthrough in Liechtenstein happened when they shifted the conversation from a specific technology, blockchain, to a desired outcome, which was trust. For businesses, this is a key strategy: frame the conversation with regulators around the value you create, not just the tech you use.
Host: It’s a narrative strategy, really.
Expert: Precisely. And finally, this model provides predictability. The process of abstraction and elaboration creates a stable yet flexible framework. For businesses, that kind of regulatory environment is gold. It reduces uncertainty and gives you the confidence to invest and innovate for the long term. This is the path to avoiding that "gray area" we talked about earlier.
Host: So to sum up, regulating new technology isn’t a top-down mandate; it's a collaborative journey. The key is to balance flexible, high-level principles with clear, specific rules. For businesses, the lesson is clear: get a seat at the table and help shape a predictable environment where innovation can thrive.
Host: Alex Ian Sutherland, thank you for making such a complex topic so clear.
Expert: My pleasure, Anna.
Host: And thank you for tuning into A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping business and technology.