Living knowledge for digital leadership
What it takes to control AI by design: human learning
AI & SOCIETY (2025)

Dov Te'eni, Inbal Yahav, David Schwartz
This study proposes a robust framework, based on systems theory, for maintaining meaningful human control over complex human-AI systems. The framework emphasizes the importance of continual human learning to parallel advancements in machine learning, operating through two distinct modes: a stable mode for efficient operation and an adaptive mode for learning. The authors demonstrate this concept with a method called reciprocal human-machine learning applied to a critical text classification system.

Problem: Traditional methods for control and oversight are insufficient for the complexity of modern AI technologies, creating a gap in ensuring that critical AI systems remain aligned with human values and goals. As AI becomes more autonomous and operates in volatile environments, there is an urgent need for a new approach to design systems that allow humans to effectively stay in control and adapt to changing circumstances.

Outcome:
- The study introduces a framework for human control over AI that operates at multiple levels and in two modes: stable and adaptive.
- Effective control requires continual human learning to match the pace of machine learning, ensuring humans can stay 'in the loop' and 'in control'.
- A method called 'reciprocal human-machine learning' is presented, where humans and AI learn from each other's feedback in an adaptive mode.
- This approach results in high-performance AI systems that are unbiased and aligned with human values.
- The framework provides a model for designing control in critical AI systems that operate in dynamic environments.
Keywords: Human-AI system, Control, Reciprocal learning, Feedback, Oversight
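The stable/adaptive control loop summarized above can be illustrated with a minimal sketch. This is not the authors' implementation: the classifier, the confidence threshold, and the `human_label` callback are all illustrative assumptions standing in for the real ML component and the human reviewer.

```python
# Hedged sketch of a reciprocal human-machine learning loop for text
# classification. SimpleClassifier, CONF_THRESHOLD, and human_label are
# illustrative stand-ins, not the method from the paper.
from collections import Counter

CONF_THRESHOLD = 0.7  # below this, the system switches to the adaptive mode


class SimpleClassifier:
    """Toy keyword-vote classifier standing in for the machine learner."""

    def __init__(self):
        self.keyword_labels = {}  # word -> Counter of labels seen with it

    def train(self, texts, labels):
        for text, label in zip(texts, labels):
            for word in text.lower().split():
                self.keyword_labels.setdefault(word, Counter())[label] += 1

    def predict(self, text):
        votes = Counter()
        for word in text.lower().split():
            votes.update(self.keyword_labels.get(word, Counter()))
        if not votes:
            return None, 0.0
        label, count = votes.most_common(1)[0]
        return label, count / sum(votes.values())


def reciprocal_loop(model, stream, human_label):
    """Stable mode: accept confident machine predictions.
    Adaptive mode: route uncertain cases to the human and retrain on the
    human's feedback, so machine learning and human oversight stay coupled."""
    decisions = []
    for text in stream:
        label, conf = model.predict(text)
        if conf >= CONF_THRESHOLD:
            decisions.append((text, label, "machine"))  # stable mode
        else:
            corrected = human_label(text)                # adaptive mode
            model.train([text], [corrected])             # machine learns from human
            decisions.append((text, corrected, "human"))
    return decisions
```

In this toy setup, confident predictions pass through the stable mode, while novel inputs trigger the adaptive mode: the human supplies a label, and the model immediately incorporates it, so the next similar input can be handled in stable mode.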
Balancing fear and confidence: A strategic approach to mitigating human risk in cybersecurity
MIS Quarterly Executive (2024)

Dennis F. Galletta, Gregory D. Moody, Paul Benjamin Lowry, Robert Willison, Scott Boss, Yan Chen, Xin “Robert” Luo, Daniel Pienta, Peter Polak, Sebastian Schuetze, and Jason Thatcher
This study explores how to improve cybersecurity by focusing on the human element. Based on interviews with C-level executives and prior experimental research, the paper proposes a strategy for communicating cyber threats that balances making employees aware of the dangers (fear) with building their confidence (efficacy) to handle those threats effectively.

Problem: Despite advanced security technology, costly data breaches continue to rise because human error remains the weakest link. Traditional cybersecurity training and policies have proven ineffective, indicating a need for a new strategic approach to manage human risk.

Outcome:
- Human behavior is the primary vulnerability in cybersecurity, and conventional training programs are often insufficient to address this risk.
- Managers must strike a careful balance in their security communications: instilling a healthy awareness of threats ('survival fear') without causing excessive panic or anxiety, which can be counterproductive.
- Building employees' confidence ('efficacy') in their ability to identify and respond to threats is just as crucial as making them aware of the dangers.
- Effective tools for changing behavior include interactive methods like phishing simulations that provide immediate feedback, gamification, and fostering a culture where security is a shared responsibility.
- The most effective approach is to empower users by providing them with clear, simple tools and the knowledge to act, rather than simply punishing mistakes or overwhelming them with fear.
Keywords: Cybersecurity, Human Risk, Fear Appeals, Security Awareness, User Actions, Management Interventions, Data Breaches
Design Knowledge for Virtual Learning Companions from a Value-centered Perspective
Communications of the Association for Information Systems (2024)

Ricarda Schlimbach, Bijan Khosrawi-Rad, Tim C. Lange, Timo Strohmann, Susanne Robra-Bissantz
This study develops design principles for Virtual Learning Companions (VLCs), which are AI-powered chatbots designed to help students with motivation and time management. Using a design science research approach, the authors conducted interviews, workshops, and built and tested several prototypes with students. The research aims to create a framework for designing VLCs that not only provide functional support but also build a supportive, companion-like relationship with the learner.

Problem: Working students in higher education often struggle to balance their studies with their jobs, leading to challenges with motivation and time management. While conversational AI like ChatGPT is becoming common, these tools often lack the element of companionship and a holistic approach to learning support. This research addresses the gap in how to design AI learning tools that effectively integrate motivation, time management, and relationship-building from a user-value-centered perspective.

Outcome:
- The study produced a comprehensive framework for designing Virtual Learning Companions (VLCs), resulting in 9 design principles, 28 meta-requirements, and 33 design features.
- The findings are structured around a “value-in-interaction” model, which proposes that a VLC's value is created across three interconnected layers: the Relationship Layer, the Matching Layer, and the Service Layer.
- Key design principles include creating a human-like and adaptive companion, enabling proactive and reactive behavior, building a trustworthy relationship, providing supportive content, and fostering a motivational and ethical learning environment.
- Evaluation of a coded prototype revealed that different student groups have different preferences, emphasizing that VLCs must be adaptable to their specific educational context and user needs to be effective.
Keywords: Conversational Agent, Education, Virtual Learning Companion, Design Knowledge, Value