Gameful Learning for a More Sustainable World: Measuring the Effect of Design Elements on Long-Term Learning Outcomes in Correct Waste Sorting
Business & Information Systems Engineering (2021)

Greta Hoffmann, Jella Pfeiffer
This study investigates the effectiveness of using a mobile game app to teach correct municipal waste sorting. In a laboratory experiment, researchers compared the learning outcomes of participants who used the game with a control group that used standard, non-game educational materials. The study also specifically analyzed the impact of two game design elements, repetition and a look-up feature, on long-term knowledge retention and real-world application.

Problem: Effective municipal waste sorting is a critical component of sustainability efforts, but many citizens lack the knowledge to do it correctly. Existing educational resources, such as paper-based flyers, are often ineffective for transmitting the large amount of information needed for long-term behavioral change, creating a gap in public education that hinders recycling efficiency.

Outcome:
- Game-based learning significantly enhanced waste sorting knowledge across all tested measures (in-game, multiple-choice, and real-life sorting) compared to traditional paper-based materials.
- The game successfully transferred learning to a real-life sorting task, a result that has been difficult to achieve in similar studies.
- The 'look-up' feature within the game was identified as a particularly promising and effective design element for improving learning outcomes.
- The combination of 'repetition' and 'look-up' game mechanics resulted in significantly higher learning outcomes, especially within the digital testing environments.
Keywords: Gameful design, Serious game, Gamification, Game-design elements, Cognitive learning strategies, Sustainability, Knowledge transfer
When Self-Humanization Leads to Algorithm Aversion: What Users Want from Decision Support Systems on Prosocial Microlending Platforms
Business & Information Systems Engineering (2022)

Pascal Oliver Heßler, Jella Pfeiffer, Sebastian Hafenbrädl
This study investigates why people often reject algorithmic advice, specifically focusing on prosocial (e.g., charitable) versus for-profit decisions on microlending platforms. Using an online experiment, the research examines how the decision-making context affects users' aversion to algorithms and their preference for more human-like decision support systems.

Problem: While algorithmic decision support systems are powerful tools, many users are averse to using them in certain situations, which reduces their adoption and effectiveness. This study addresses the gap in understanding why this 'algorithm aversion' occurs by exploring how the desire to feel human in prosocial contexts, where empathy and autonomy are valued, influences user preferences for decision support.

Outcome:
- In prosocial contexts, like charitable microlending, people place a higher importance on human-like attributes such as empathy and autonomy compared to for-profit contexts.
- This increased focus on empathy and autonomy leads to a greater aversion to using computer-based algorithms for decision support.
- Users who are more averse to algorithms show a stronger preference for decision support systems that seem more human-like.
- Consequently, users on prosocial platforms prefer more human-like decision support than users on for-profit platforms, suggesting that systems should be designed differently depending on their purpose.
Keywords: Self-humanization, Algorithm aversion, Empathy, Autonomy, Decision support, Prosocial platforms
Building Habits in the Digital Age: Incorporating Psychological Needs and Knowledge from Practitioners to Inform the Design of Digital Therapeutics
International Conference on Wirtschaftsinformatik (2023)

Jeannette Stark, Thure Weimann, Felix Reinsch, Emily Hickmann, Maren Kählig, Carola Gißke, Peggy Richter
This study reviews the psychological requirements for forming habits and analyzes how these requirements are implemented in existing mobile habit-tracking apps. Through a content analysis of 57 applications, the research identifies key design gaps and proposes a set of principles to inform the creation of more effective Digital Therapeutics (DTx) for long-term behavioral change.

Problem: Noncommunicable diseases (NCDs), a leading cause of death, often require sustained lifestyle and behavioral changes. While many digital apps aim to support habit formation, they often fail to facilitate the entire process, particularly the later stages where a habit becomes automatic and reliance on technology should decrease, creating a gap in effective long-term support.

Outcome:
- Conventional habit apps primarily support the first two stages of habit formation: deciding on a habit and translating it into an initial behavior.
- Most apps neglect the crucial later stages of habit strengthening, where technology use should be phased out to allow the habit to become truly automatic.
- A conflict of interest was identified, as the commercial need for continuous user engagement in many apps contradicts the goal of making a user's new habit independent of the technology.
- The research proposes specific design principles for Digital Therapeutics (DTx) to better support all four stages of habit formation, offering a pathway for developing more effective tools for NCD prevention and treatment.
Keywords: Behavioral Change, Digital Therapeutics, Habits, Habit Apps, Non-communicable Diseases
Responsible AI Design: The Authenticity, Control, Transparency Theory
Journal of the Association for Information Systems (2025)

Andrea Rivera, Kaveh Abhari, Bo Xiao
This study explores how to design Artificial Intelligence (AI) responsibly from the perspective of AI designers. Using a grounded theory approach based on interviews with industry professionals, the paper develops the Authenticity, Control, Transparency (ACT) theory as a new framework for creating ethical AI.

Problem: Current guidelines for responsible AI are fragmented and lack a cohesive theory to guide practice, leading to inconsistent outcomes. Existing research often focuses narrowly on specific attributes like algorithms or harm minimization, overlooking the broader design decisions that shape an AI's behavior from its inception.

Outcome:
- The study introduces the Authenticity, Control, and Transparency (ACT) theory as a practical framework for responsible AI design.
- It identifies three core mechanisms—authenticity, control, and transparency—that translate ethical design decisions into responsible AI behavior.
- These mechanisms are applied across three key design domains: the AI's architecture, its algorithms, and its functional affordances (capabilities offered to users).
- The theory shifts the focus from merely minimizing harm to also maximizing the benefits of AI, providing a more balanced approach to ethical design.
Keywords: Responsible AI, AI Ethics, AI Design, Authenticity, Transparency, Control, Algorithmic Accountability
Making Sense of Discursive Formations and Program Shifts in Large-Scale Digital Infrastructures
Journal of the Association for Information Systems (2025)

Egil Øvrelid, Bendik Bygstad, Ole Hanseth
This study examines how public and professional discussions, known as discourses, shape major changes in large-scale digital systems like national e-health infrastructures. Using an 18-year in-depth case study of Norway's e-health development, the research analyzes how high-level strategic trends interact with on-the-ground practical challenges to drive fundamental shifts in technology programs.

Problem: Implementing complex digital infrastructures like national e-health systems is notoriously difficult, and leaders often struggle to understand why some initiatives succeed while others fail. Previous research focused heavily on the role of powerful individuals or groups, paying less attention to the underlying, systemic influence of how different conversations about technology and strategy converge over time. This gap makes it difficult for policymakers to make sensible, long-term decisions and navigate the evolution of these critical systems.

Outcome:
- Major shifts in large digital infrastructure programs occur when high-level strategic discussions (macrodiscourses) and practical, operational-level discussions (microdiscourses) align and converge.
- This convergence happens through three distinct processes: 'connection' (a shared recognition of a problem), 'matching' (evaluating potential solutions that fit both high-level goals and practical needs), and 'merging' (making a decision and reconciling the different perspectives).
- The result of this convergence is a new "discursive formation"—a powerful, shared understanding that aligns stakeholders, technology, and strategy, effectively launching a new program and direction.
- Policymakers and managers can use this framework to better analyze the alignment between broad technological trends and their organization's specific, internal needs, leading to more informed and realistic strategic planning.
Keywords: Discursive Formations, Discourse Convergence, Large-Scale Digital Infrastructures, E-Health Programs, Program Shifts, Sociotechnical Systems, IT Strategy
Toward Triadic Delegation: How Agentic IS Artifacts Affect the Patient-Doctor Relationship in Healthcare
Journal of the Association for Information Systems (2025)

Pascal Fechner, Luis Lämmermann, Jannik Lockl, Maximilian Röglinger, Nils Urbach
This study investigates how autonomous information systems (agentic IS artifacts) are transforming the traditional two-way relationship between patients and doctors into a three-way, or triadic, relationship. Using an in-depth case study of an AI-powered health companion for managing neurogenic lower urinary tract dysfunction, the paper analyzes the new dynamics, roles, and interactions that emerge when an intelligent technology becomes an active participant in healthcare delivery.

Problem: With the rise of artificial intelligence in medicine, autonomous systems are no longer just passive tools but active agents in patient care. This shift challenges the conventional patient-doctor dynamic, yet existing theories are ill-equipped to explain the complexities of this new three-part relationship. This research addresses the gap in understanding how these AI agents redefine roles, interactions, and potential conflicts in patient-centric healthcare.

Outcome:
- The introduction of an AI agent transforms the dyadic patient-doctor relationship into a triadic one, often with the AI acting as a central intermediary.
- The AI's capabilities create 'attribute interference,' where responsibilities and knowledge overlap between the patient, doctor, and AI, introducing new complexities.
- New 'triadic delegation choices' emerge, allowing tasks to be delegated to the doctor, the AI, or both, based on factors like task complexity and emotional context.
- The study identifies novel conflicts arising from this triad, including human concerns over losing control (autonomy conflicts), new information imbalances, and the blurring of traditional medical roles.
Keywords: Agentic IS Artifacts, Delegation, Patient-Doctor Relationship, Personalized Healthcare, Triadic Delegation, Healthcare AI
Understanding the Ethics of Generative AI: Established and New Ethical Principles
Communications of the Association for Information Systems (2025)

Joakim Laine, Matti Minkkinen, Matti Mäntymäki
This study conducts a comprehensive review of academic literature to synthesize the ethical principles of generative artificial intelligence (GenAI) and large language models (LLMs). It explores how established AI ethics are presented in the context of GenAI and identifies what new ethical principles have surfaced due to the unique capabilities of this technology.

Problem: The rapid development and widespread adoption of powerful GenAI tools like ChatGPT have introduced new ethical challenges that are not fully covered by existing AI ethics frameworks. This creates a critical gap, as the specific ethical principles required for the responsible development and deployment of GenAI systems remain relatively unclear.

Outcome:
- Established AI ethics principles (e.g., fairness, privacy, responsibility) are still relevant, but their importance and interpretation are shifting in the context of GenAI.
- Six new ethical principles specific to GenAI are identified: respect for intellectual property, truthfulness, robustness, recognition of malicious uses, sociocultural responsibility, and human-centric design.
- Principles such as non-maleficence, privacy, and environmental sustainability have gained heightened importance due to the general-purpose, large-scale nature of GenAI systems.
- The paper proposes 'meta-principles' for managing ethical complexities, including ranking principles, mapping contradictions between them, and implementing continuous monitoring.
Keywords: Generative AI, AI Ethics, Large Language Models, AI Governance, Ethical Principles, AI Auditing
Conceptualizing IT Artefacts for Policymaking – How IT Artefacts Evolve as Policy Objects
Communications of the Association for Information Systems (2025)

Karin Väyrynen, Sari Laari-Salmela, Netta Iivari, Arto Lanamäki, Marianne Kinnula
This study explores how an information technology (IT) artefact evolves into a 'policy object' during the policymaking process, using a 4.5-year longitudinal case study of the Finnish Taximeter Law. The research proposes a conceptual framework that identifies three forms of the artefact as it moves through the policy cycle: a mental construct, a policy text, and a material IT artefact. This framework helps to understand the dynamics and challenges of regulating technology.

Problem: While policymaking related to information technology is increasingly significant, the challenges stemming from the complex, multifaceted nature of IT are poorly understood. There is a specific gap in understanding how real-world IT artefacts are translated into abstract policy texts and how those texts are subsequently reinterpreted back into actionable technologies. This 'translation' process often leads to ambiguity and unintended consequences during implementation.

Outcome:
- Proposes a novel conceptual framework for understanding the evolution of an IT artefact as a policy object during a public policy cycle.
- Identifies three distinct forms the IT artefact takes: 1) a mental construct in the minds of policymakers and stakeholders, 2) a policy text such as a law, and 3) a material IT artefact as a real-world technology that aligns with the policy.
- Highlights the significant challenges in translating complex real-world technologies into abstract legal text and back again, which can create ambiguity and implementation difficulties.
- Distinguishes between IT artefacts at the policy level and IT artefacts as real-world technologies, showing how they evolve on separate but interconnected tracks.
Keywords: IT Artefact, IT Regulation, Law, Policy Object, Policy Cycle, Public Policymaking, European AI Act
The Digital Language of Emotion: Cautions and Solutions for Strategic Use of Emoji in Responding to Information System Incidents
Communications of the Association for Information Systems (2025)

Soojin Roh, Shubin Yu
This paper investigates whether, when, and how organizations can strategically use emojis in online communications when responding to information system (IS) incidents. Through three experimental studies conducted with Chinese and U.S. participants, the research examines how cultural context, the source of the message (CEO vs. company account), and incident type influence public perception.

Problem: As companies increasingly use emojis in professional communications, there is a risk of missteps, especially in crisis situations. A lack of understanding of how emojis shape public perception across different cultures can lead to reputational harm, and existing research lacks empirical evidence on their strategic and cross-cultural application in responding to IS incidents.

Outcome:
- For Chinese audiences, using emojis in IS incident responses is generally positive, as it reduces psychological distance, alleviates anger, and increases perceptions of warmth and competence.
- The positive effect of emojis in China is stronger when used by an official company account rather than a CEO, and when the company is responsible for the incident.
- In contrast, U.S. audiences tend to evaluate the use of emojis negatively in incident responses.
- The negative perception among U.S. audiences is particularly strong when a CEO uses an emoji to respond to an internally-caused incident, leading to increased anger and perceptions of incompetence.
Keywords: Emoji, Information System Incident, Social Media, Psychological Distance, Warmth, Competence
Frugal Fintech Ecosystem Development: A Resource Orchestration Perspective
Communications of the Association for Information Systems (2024)

Prakash Dhavamani, Barney Tan, Daniel Gozman, Leben Johnson
This study investigates how a financial technology (Fintech) ecosystem was successfully established in a resource-constrained environment, using the Vizag Fintech Valley in India as a case study. The research examines the specific processes of gathering resources, building capabilities, and creating market value under significant budget limitations. It proposes a practical framework to guide the development of similar 'frugal' innovation hubs in other developing regions.

Problem: There is limited research on how to launch and develop a Fintech ecosystem, especially in resource-scarce developing countries where the potential benefits like financial inclusion are greatest. Most existing studies focus on developed nations, and their findings are not easily transferable to environments with tight budgets, a lack of specialized talent, and less mature infrastructure. This knowledge gap makes it difficult for policymakers and entrepreneurs to create successful Fintech hubs in these regions.

Outcome:
- The research introduces a practical framework for building Fintech ecosystems in resource-scarce settings, called the Frugal Fintech Ecosystem Development (FFED) framework.
- The framework identifies three core stages: Structuring (gathering and prioritizing available resources), Bundling (combining resources to build capabilities), and Leveraging (using those capabilities to seize market opportunities).
- It highlights five key sub-processes for success in a frugal context: bricolaging (creatively using resources at hand), prioritizing, emulating (learning from established ecosystems), extrapolating, and sandboxing (safe, small-scale experimentation).
- The study shows that by orchestrating resources effectively, even frugal ecosystems can achieve outcomes comparable to those in well-funded regions, a concept termed 'equifinality'.
- The findings offer an evidence-based guide for policymakers to design regulations and support models that foster sustainable Fintech growth in developing economies.
Keywords: Fintech Ecosystem, India, Frugal Innovation, Resource Orchestration, Case Study
Watch Out, You are Live! Toward Understanding the Impact of AI on Privacy of Employees
Communications of the Association for Information Systems (2024)

Ashneet Kaur, Sudhanshu Maheshwari, Indranil Bose, Simarjeet Singh
This study conducts a systematic literature review to comprehensively explore the implications of Artificial Intelligence (AI) on employee privacy. It utilizes the privacy calculus framework to analyze the trade-offs organizations and employees face when integrating AI technologies in the workplace. The research evaluates how different types of AI technologies compromise or safeguard privacy and discusses their varying impacts.

Problem: The rapid and pervasive adoption of AI in the workplace has enhanced efficiency but also raised significant concerns regarding employee privacy. There is a research gap in holistically understanding the broad implications of advancing AI technologies on employee privacy, as previous studies often focus on narrow applications without a comprehensive theoretical framework.

Outcome:
- The integration of AI in the workplace presents a trade-off, offering benefits like objective performance evaluation while posing significant risks such as over-surveillance and erosion of trust.
- The study categorizes AI into four advancing types (descriptive, predictive, prescriptive, and autonomous), each progressively increasing the complexity of privacy challenges and altering the employee privacy calculus.
- As AI algorithms become more advanced and opaque, it becomes more difficult for employees to understand how their data is used, leading to feelings of powerlessness and potential resistance.
- The paper identifies a significant lack of empirical research specifically on AI's impact on employee privacy, as opposed to the more widely studied area of consumer privacy.
- To mitigate privacy risks, the study recommends practical strategies for organizations, including transparent communication about data practices, involving employees in AI system design, and implementing strong ethical AI frameworks.
Keywords: Artificial Intelligence, Employee Privacy, Privacy Calculus, Systematic Review, Workplace Surveillance, AI Ethics
IBM Watson Health Growth Strategy: Is Artificial Intelligence (AI) the Answer?
Communications of the Association for Information Systems (2025)

Abhinav Shekhar, Rakesh Gupta, Sujeet Kumar Sharma
This study analyzes IBM's strategic dilemma with its Watson Health initiative, which aimed to monetize artificial intelligence for cancer detection and treatment recommendations. It explores whether IBM should continue its specialized focus on healthcare (a vertical strategy) or reposition Watson as a versatile, cross-industry AI platform (a horizontal strategy). The paper provides insights into the opportunities and challenges associated with unlocking the transformational power of AI in a business context.

Problem: Despite a multi-billion dollar investment and initial promise, IBM's Watson Health struggled with profitability, model accuracy, and scalability. The AI's recommendations were not consistently reliable or generalizable across different patient populations and healthcare systems, leading to poor adoption. This created a critical strategic crossroads for IBM: whether to continue investing heavily in the specialized healthcare vertical or to pivot towards a more scalable, general-purpose AI platform to drive future growth.

Outcome:
- Model Accuracy & Bias: Watson's performance was inconsistent, and its recommendations, trained primarily on US data, were not always applicable to international patient populations, revealing significant algorithmic bias.
- Lack of Explainability: The 'black box' nature of the AI made it difficult for clinicians to trust its recommendations, hindering adoption as they could not understand its reasoning process.
- Integration and Scaling Challenges: Integrating Watson into existing hospital workflows and electronic health records was costly and complex, creating significant barriers to widespread implementation.
- Strategic Dilemma: The challenges forced IBM to choose between continuing its high-investment vertical strategy in healthcare, pivoting to a more scalable horizontal cross-industry platform, or attempting a convergence of both approaches.
Keywords: Artificial Intelligence (AI), AI Strategy, Watson, Healthcare AI, Vertical AI, Horizontal AI, AI Ethics
Understanding the Implementation of Responsible Artificial Intelligence in Organizations: A Neo-Institutional Theory Perspective
Communications of the Association for Information Systems (2025)

David Horneber
This study conducts a literature review to understand why organizations struggle to effectively implement Responsible Artificial Intelligence (AI). Using a neo-institutional theory framework, the paper analyzes institutional pressures, common challenges, and the roles that AI practitioners play in either promoting or hindering the adoption of responsible AI practices.

Problem: Despite growing awareness of AI's ethical and social risks and the availability of responsible AI frameworks, many organizations fail to translate these principles into practice. This gap between stated policy and actual implementation means that the goals of making AI safe and ethical are often not met, creating significant risks for businesses and society while undermining trust.

Outcome:
- A fundamental tension exists between the pressures to adopt Responsible AI (e.g., legal compliance, reputation) and inhibitors (e.g., market demand for functional AI, lack of accountability), leading to ineffective, symbolic implementation.
- Ineffectiveness often takes two forms: 'policy-practice decoupling' (policies are adopted for show but not implemented) and 'means-end decoupling' (practices are implemented but fail to achieve their intended ethical goals).
- AI practitioners play crucial roles as either 'institutional custodians' who resist change to preserve existing technical practices, or as 'institutional entrepreneurs' who champion the implementation of Responsible AI.
- The study concludes that a bottom-up approach by motivated practitioners is insufficient; effective implementation requires strong organizational support, clear structures, and proactive processes to bridge the gap between policy and successful outcomes.
Keywords: Artificial Intelligence, Responsible AI, AI Ethics, Organizations, Neo-Institutional Theory
Affordance-Based Pathway Model of Social Inclusion: A Case Study of Virtual Worlds and People With Lifelong Disability
Journal of the Association for Information Systems (2026)

Karen Stendal, Maung K. Sein, Devinder Thapa
This study explores how individuals with lifelong disabilities (PWLD) use virtual worlds, specifically Second Life, to achieve social inclusion. Using a qualitative approach with in-depth interviews and participant observation, the researchers analyzed how PWLD experience the platform's features. The goal was to develop a model explaining the process through which technology facilitates greater community participation and interpersonal connection for this marginalized group.

Problem: People with lifelong disabilities often face significant social isolation and exclusion due to physical, mental, or sensory impairments that hinder their full participation in society. This lack of social connection can negatively impact their psychological and emotional well-being. This research addresses the gap in understanding the specific mechanisms by which technology, like virtual worlds, can help this population move from isolation to inclusion.

Outcome:
- Virtual worlds offer five key 'affordances' (action possibilities) that empower people with lifelong disabilities (PWLD).
- Three 'functional' affordances were identified: Communicability (interacting without barriers like hearing loss), Mobility (moving freely without physical limitations), and Personalizability (controlling one's digital appearance and whether to disclose a disability).
- These functional capabilities enable two 'social' affordances: Engageability (the ability to join in social activities) and Self-Actualizability (the ability to realize one's potential and help others).
- The study proposes an 'Affordance-Based Pathway Model' which shows how using these features helps PWLD build interpersonal relationships and participate in communities, leading to social inclusion.
Keywords: Social Inclusion, Virtual Worlds (VW), People With Lifelong Disability (PWLD), Affordances, Second Life, Assistive Technology, Qualitative Study
Using Lessons from the COVID-19 Crisis to Move from Traditional to Adaptive IT Governance
MIS Quarterly Executive (2022)

Heiko Gewald, Heinz-Theo Wagner
This study analyzes how IT governance structures in nine international companies, particularly in regulated industries, were adapted during the COVID-19 crisis. It investigates the shift from rigid, formal governance to more flexible, relational models that enabled rapid decision-making. The paper provides recommendations on how to integrate these crisis-mode efficiencies to create a more adaptive IT governance system for post-crisis operations.

Problem: Traditional IT governance systems are often slow, bureaucratic, and focused on control and risk avoidance, which makes them ineffective during a crisis requiring speed and flexibility. The COVID-19 pandemic exposed this weakness, as companies found their existing processes were too rigid to handle the sudden need for digital transformation and remote work. The study addresses how organizations can evolve their governance to be more agile without sacrificing regulatory compliance.

Outcome:
- Companies successfully adapted during the crisis by adopting leaner decision-making structures with fewer participants.
- The influence of IT experts in decision-making increased significantly, shifting the focus from risk-avoidance to finding the best functional solutions.
- Formal controls were complemented or replaced by relational governance based on social interaction, trust, and collaboration, which proved to be more efficient.
- The paper recommends permanently adopting these changes to create an 'adaptive IT governance' system that balances flexibility with compliance, ultimately delivering more business value.
Keywords: IT governance, adaptive governance, crisis management, COVID-19, relational governance, formal governance, decision-making structures
Building an Artificial Intelligence Explanation Capability
MIS Quarterly Executive (2022)

Ida Someh, Barbara H. Wixom, Cynthia M. Beath, Angela Zutavern
This study introduces the concept of an "AI Explanation Capability" (AIX) that companies must develop to successfully implement artificial intelligence. Using case studies from the Australian Taxation Office and General Electric, the paper outlines a framework with four key dimensions (decision tracing, bias remediation, boundary setting, and value formulation) to help organizations address the inherent challenges of AI.

Problem Businesses are increasingly adopting AI but struggle with its distinctive challenges, particularly the "black-box" nature of complex models. This opacity makes it difficult to trust AI, manage risks like algorithmic bias, prevent unintended negative consequences, and prove the technology's business value, ultimately hindering widespread and successful deployment.

Outcome - AI projects present four unique challenges: Model Opacity (the inability to understand a model's inner workings), Model Drift (degrading performance over time), Mindless Actions (acting without context), and the Unproven Nature of AI (difficulty in demonstrating value).
- To overcome these challenges, organizations must build a new organizational competency called an AI Explanation Capability (AIX).
- The AIX capability comprises four dimensions: Decision Tracing (making models understandable), Bias Remediation (identifying and fixing unfairness), Boundary Setting (defining safe operating limits for AI), and Value Formulation (articulating and measuring the business value of AI).
- Building this capability requires a company-wide effort, involving domain experts and business leaders alongside data scientists to ensure AI is deployed safely, ethically, and effectively.
AI explanation, explainable AI, AIX capability, model opacity, model drift, AI governance, bias remediation
Unexpected Benefits from a Shadow Environmental Management Information System
MIS Quarterly Executive (2021)

Johann Kranz, Marina Fiedler, Anna Seidler, Kim Strunk, Anne Ixmeier
This study analyzes a German chemical company where a single employee, outside of the formal IT department, developed an Environmental Management Information System (EMIS). The paper examines how this grassroots 'shadow IT' project was successfully adopted company-wide, producing both planned and unexpected benefits. The findings are used to provide recommendations for business leaders on how to effectively implement information systems that drive both eco-sustainability and business value.

Problem Many companies struggle to effectively improve their environmental sustainability because critical information is often inaccessible, fragmented across different departments, or simply doesn't exist. This information gap prevents decision-makers from getting a unified view of their products' environmental impact, making it difficult to turn sustainability goals into concrete actions and strategic advantages.

Outcome - Greater Product Transparency: The system made it easy for employees to assess the environmental impact of materials and products.
- Improved Environmental Footprint: The company improved its energy and water efficiency, reduced carbon emissions, and increased waste productivity.
- Strategic Differentiation: The system provided a competitive advantage by enabling the company to meet growing customer demand for verified sustainable products, leading to increased sales and market share.
- Increased Profitability: Sustainable products became surprisingly profitable, contributing to higher turnover and outperforming competitors.
- More Robust Sourcing: The system helped identify supply chain risks, such as the scarcity of key raw materials, prompting proactive strategies to ensure resource availability.
- Empowered Employees: The tool spurred an increase in bottom-up, employee-driven sustainability initiatives beyond core business operations.
Environmental Management Information System (EMIS), Shadow IT, Corporate Sustainability, Eco-sustainability, Case Study, Strategic Value, Supply Chain Transparency
A Narrative Exploration of the Immersive Workspace 2040
MIS Quarterly Executive (2025)

Alexander Richter, Shahper Richter, Nastaran Mohammadhossein
This study explores the future of work in the public sector by developing a speculative narrative, 'Immersive Workspace 2040.' Created through a structured methodology in collaboration with a New Zealand government ministry, the paper uses this narrative to make abstract technological trends tangible and analyze their deep structural implications.

Problem Public sector organizations face significant challenges adapting to disruptive digital innovations like AI due to traditionally rigid workforce structures and planning models. This study addresses the need for government leaders to move beyond incremental improvements and develop a forward-looking vision to prepare their workforce for profound, nonlinear changes.

Outcome - A major transformation will be the shift from fixed jobs to a 'Dynamic Talent Orchestration System,' where AI orchestrates teams based on verifiable skills for specific projects, fundamentally changing career paths and HR systems.
- The study identifies a 'Human-AI Governance Paradox,' where technologies designed to augment human intellect can also erode human agency and authority, necessitating safeguards like tiered autonomy frameworks to ensure accountability remains with humans.
- Unlike the private sector's focus on efficiency, public sector AI must be designed for value alignment, embedding principles like equity, fairness, and transparency directly into its operational logic to maintain public trust.
Future of Work, Immersive Workspace, Human-AI Collaboration, Public Sector Transformation, Narrative Foresight, AI Governance, Digital Transformation