Balancing fear and confidence: A strategic approach to mitigating human risk in cybersecurity

Dennis F. Galletta, Gregory D. Moody, Paul Benjamin Lowry, Robert Willison, Scott Boss, Yan Chen, Xin “Robert” Luo, Daniel Pienta, Peter Polak, Sebastian Schuetze, and Jason Thatcher
This study explores how to improve cybersecurity by focusing on the human element. Based on interviews with C-level executives and prior experimental research, the paper proposes a strategy for communicating cyber threats that balances making employees aware of the dangers (fear) with building their confidence (efficacy) to handle those threats effectively.

Problem: Despite advanced security technology, costly data breaches continue to rise because human error remains the weakest link. Traditional cybersecurity training and policies have proven ineffective, indicating a need for a new strategic approach to managing human risk.

Outcome:
- Human behavior is the primary vulnerability in cybersecurity, and conventional training programs are often insufficient to address this risk.
- Managers must strike a careful balance in their security communications: instilling a healthy awareness of threats ('survival fear') without causing excessive panic or anxiety, which can be counterproductive.
- Building employees' confidence ('efficacy') in their ability to identify and respond to threats is just as crucial as making them aware of the dangers.
- Effective tools for changing behavior include interactive methods like phishing simulations that provide immediate feedback, gamification, and fostering a culture where security is a shared responsibility.
- The most effective approach is to empower users by providing them with clear, simple tools and the knowledge to act, rather than simply punishing mistakes or overwhelming them with fear.
Keywords: Cybersecurity, Human Risk, Fear Appeals, Security Awareness, User Actions, Management Interventions, Data Breaches
Design Knowledge for Virtual Learning Companions from a Value-centered Perspective

Ricarda Schlimbach, Bijan Khosrawi-Rad, Tim C. Lange, Timo Strohmann, Susanne Robra-Bissantz
This study develops design principles for Virtual Learning Companions (VLCs), which are AI-powered chatbots designed to help students with motivation and time management. Using a design science research approach, the authors conducted interviews and workshops and built and tested several prototypes with students. The research aims to create a framework for designing VLCs that not only provide functional support but also build a supportive, companion-like relationship with the learner.

Problem: Working students in higher education often struggle to balance their studies with their jobs, leading to challenges with motivation and time management. While conversational AI like ChatGPT is becoming common, these tools often lack the element of companionship and a holistic approach to learning support. This research addresses the gap in how to design AI learning tools that effectively integrate motivation, time management, and relationship-building from a user-value-centered perspective.

Outcome:
- The study produced a comprehensive framework for designing Virtual Learning Companions (VLCs), resulting in 9 design principles, 28 meta-requirements, and 33 design features.
- The findings are structured around a “value-in-interaction” model, which proposes that a VLC's value is created across three interconnected layers: the Relationship Layer, the Matching Layer, and the Service Layer.
- Key design principles include creating a human-like and adaptive companion, enabling proactive and reactive behavior, building a trustworthy relationship, providing supportive content, and fostering a motivational and ethical learning environment.
- Evaluation of a coded prototype revealed that different student groups have different preferences, emphasizing that VLCs must be adaptable to their specific educational context and user needs to be effective.
Keywords: Conversational Agent, Education, Virtual Learning Companion, Design Knowledge, Value
Regulating Emerging Technologies: Prospective Sensemaking through Abstraction and Elaboration

Stefan Seidel, Christoph J. Frick, Jan vom Brocke
This study examines how various actors, including legal experts, government officials, and industry leaders, collaborated to create laws for new technologies like blockchain. Through a case study in Liechtenstein, it analyzes the process of developing a law on "trustworthy technology," focusing on how the participants collectively made sense of a complex and evolving subject to construct a new regulatory framework.

Problem: Governments face a significant challenge in regulating emerging digital technologies. They must create rules that prevent harmful effects and protect users without stifling innovation. This is particularly difficult when the full potential and risks of a new technology are not yet clear, creating regulatory gaps and uncertainty for businesses.

Outcome:
- Creating effective regulation for new technologies is a process of 'collective prospective sensemaking,' in which diverse stakeholders build a shared understanding over time.
- This process relies on two interrelated activities: 'abstraction' and 'elaboration'. Abstraction involves generalizing the essential properties of a technology to create flexible, technology-neutral rules that encourage innovation.
- Elaboration involves specifying details and requirements to provide legal certainty and protect users.
- Through this process, the regulatory target can evolve significantly, as seen in the case study's shift from regulating 'blockchain/cryptocurrency' to a broader, more durable law for the 'token economy' and 'trustworthy technology'.
Keywords: Technology regulation, prospective sensemaking, sensemaking, institutional construction, emerging technology, blockchain, token economy