Structural Estimation of Auction Data through Equilibrium Learning and Optimal Transport
International Conference on Wirtschaftsinformatik (2025)

Markus Ewert and Martin Bichler
This study proposes a new method for analyzing auction data to understand bidders' private valuations. It extends an existing framework by reformulating the estimation challenge as an optimal transport problem, which avoids the statistical limitations of traditional techniques. This approach uses a proxy equilibrium model to analytically evaluate bid distributions, leading to more accurate and robust estimates.
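The core idea of estimation via optimal transport can be illustrated with a minimal sketch (a textbook first-price auction with uniform valuations, not the authors' setting or estimator): choose the valuation-distribution parameter whose implied equilibrium bid distribution is closest, in Wasserstein (transport) cost, to the observed bids. The shading rule `b(v) = v*(n-1)/n` and the grid search below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(42)
n_bidders, theta_true = 3, 10.0

# "Observed" first-price bids under the symmetric equilibrium b(v) = v*(n-1)/n
# for valuations v ~ U[0, theta_true] (synthetic data for illustration).
observed_bids = rng.uniform(0.0, theta_true, 5000) * (n_bidders - 1) / n_bidders

def ot_loss(theta, n_samples=5000):
    # 1-D Wasserstein distance between observed bids and bids simulated
    # under a candidate valuation upper bound theta.
    simulated = rng.uniform(0.0, theta, n_samples) * (n_bidders - 1) / n_bidders
    return wasserstein_distance(observed_bids, simulated)

# Pick the candidate whose simulated bid distribution is cheapest to
# transport onto the observed one.
grid = np.linspace(5.0, 15.0, 101)
theta_hat = grid[int(np.argmin([ot_loss(t) for t in grid]))]
```

In one dimension the optimal transport cost reduces to an integral over quantile differences, which is why no kernel smoothing or bandwidth choice is needed here.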

Problem Designing profitable auctions, such as setting an optimal reserve price, requires knowing how much bidders are truly willing to pay, but this information is hidden. Existing methods to estimate these valuations from observed bids often suffer from statistical biases and inaccuracies, especially with limited data, leading to poor auction design and lost revenue for sellers.

Outcome - The proposed optimal transport-based estimator consistently outperforms established kernel-based techniques, showing significantly lower error in estimating true bidder valuations.
- The new method is more robust, providing accurate estimates even in scenarios with high variance in bidding behavior where traditional methods fail.
- In practical tests, reserve prices set using the new method's estimates led to significant revenue gains for the auctioneer, while prices derived from older methods resulted in zero revenue.
Structural Estimation, Auctions, Equilibrium Learning, Optimal Transport, Econometrics
Improving the Customer Onboarding Business Process at PT SEVIMA Using Heuristic Redesign
Jurnal SISFO (2025)

Ribka Devina Margaretha, Mahendrawathi ER, Sugianto Halim
This study addresses challenges in PT SEVIMA's customer onboarding process, where Account Managers (AMs) were not always aligned with client needs. Using a Business Process Management (BPM) Lifecycle approach combined with heuristic principles (Resequencing, Specialize, Control Addition, and Empower), the research redesigns the existing workflow. The goal is to improve the matching of AMs to clients, thereby increasing onboarding efficiency and customer satisfaction.

Problem PT SEVIMA, an IT startup for the education sector, struggled with an inefficient customer onboarding process. The primary issue was the frequent mismatch between the assigned Account Manager's skills and the specific, technical needs of the new client, leading to implementation delays and decreased satisfaction.

Outcome - Recommends grouping Account Managers (AMs) based on specialization profiles built from post-project evaluations.
- Suggests moving the initial client needs survey to occur before an AM is assigned to ensure a better match.
- Proposes involving the technical migration team earlier in the process to align strategies from the start.
- These improvements aim to enhance onboarding efficiency, reduce rework, and ultimately increase client satisfaction.
Business Process Redesign, Customer Onboarding, Knowledge-Intensive Process, Heuristics Method, Startup, BPM Lifecycle
How Boards of Directors Govern Artificial Intelligence
MIS Quarterly Executive (2023)

Benjamin van Giffen, Helmuth Ludwig
This study investigates how corporate boards of directors oversee and integrate Artificial Intelligence (AI) into their governance practices. Based on in-depth interviews with high-profile board members from diverse industries, the research identifies common challenges and provides examples of effective strategies for board-level AI governance.

Problem Despite the transformative impact of AI on the business landscape, the majority of corporate boards struggle to understand its implications and their role in governing it. This creates a significant gap, as boards have a fiduciary responsibility to oversee strategy, risk, and investment related to critical technologies, yet AI is often not a mainstream boardroom topic.

Outcome - Identified four key groups of board-level AI governance issues: Strategy and Firm Competitiveness, Capital Allocation, AI Risks, and Technology Competence.
- Boards should ensure AI is integrated into the company's core business strategy by evaluating its impact on the competitive landscape and making it a key topic in annual strategy meetings.
- Effective capital allocation involves encouraging AI experimentation, securing investments in foundational AI capabilities, and strategically considering external partnerships and acquisitions.
- To manage risks, boards must engage with experts, integrate AI-specific risks into Enterprise Risk Management (ERM) frameworks, and address ethical, reputational, and legal challenges.
- Enhancing technology competence requires boards to develop their own AI literacy, review board and committee composition for relevant expertise, and include AI competency in executive succession planning.
AI governance, board of directors, corporate governance, artificial intelligence, strategic management, risk management, technology competence
Evolution of the Metaverse
MIS Quarterly Executive (2023)

Mary Lacity, Jeffrey K. Mullins, Le Kuai
This paper explores the potential opportunities and risks of the emerging metaverse for business and society through an interview format with leading researchers. The study analyzes the current state of metaverse technologies, their potential business applications, and critical considerations for governance and ethical implementation for IT practitioners.

Problem Following renewed corporate interest and massive investment, the concept of the metaverse has generated significant hype, but businesses lack clarity on its definition, tangible value, and long-term impact. This creates uncertainty for leaders about how to approach the technology, differentiate it from past virtual worlds, and navigate the significant risks of surveillance, data privacy, and governance.

Outcome - The business value of the metaverse centers on providing richer, safer experiences for customers and employees, reducing costs, and meeting organizational goals through applications like immersive training, virtual collaboration, and digital twins.
- Companies face a critical choice between centralized 'Web 2' platforms, which monetize user data, and decentralized 'Web 3' models that offer users more control over their digital assets and identity.
- The metaverse can improve employee onboarding, training for dangerous tasks, and collaboration, offering a greater sense of presence than traditional videoconferencing.
- Key challenges include the lack of a single, interoperable metaverse (which is likely over a decade away), limited current capabilities of decentralized platforms, and the potential for negative consequences like addiction and surveillance.
- Businesses are encouraged to explore potential use cases, participate in creating open standards, and consider both the immense promise and potential perils before making significant investments.
Metaverse, Virtual Worlds, Augmented Reality, Web 3.0, Digital Twin, Business Strategy, Governance
How HireVue Created "Glass Box" Transparency for its AI Application
MIS Quarterly Executive (2025)

Monideepa Tarafdar, Irina Rets, Lindsey Zuloaga, Nathan Mondragon
This paper presents a case study on HireVue, a company that provides an AI application for assessing job interviews. It describes the transparency-related challenges HireVue faced and explains how it addressed them by developing a "glass box" approach, which focuses on making the entire system of AI development and deployment understandable, rather than just the technical algorithm.

Problem AI applications used for critical decisions, such as hiring, are often perceived as technical "black boxes." This lack of clarity creates significant challenges for businesses in trusting the technology, ensuring fairness, mitigating bias, and complying with regulations, which hinders the responsible adoption of AI in recruitment.

Outcome - The study introduces a "glass box" model for AI transparency, which shifts focus from the technical algorithm to the broader sociotechnical system, including design processes, client interactions, and organizational functions.
- HireVue implemented five types of transparency practices: pre-deployment client-focused, internal, post-deployment client-focused, knowledge-related, and audit-related.
- This multi-faceted approach helps build trust with clients, regulators, and applicants by providing clarity on the AI's application, limitations, and validation processes.
- The findings serve as a practical guide for other AI software companies on how to create effective and comprehensive transparency for their own applications, especially in high-stakes fields.
AI transparency, algorithmic hiring, glass box model, ethical AI, recruitment technology, HireVue, case study
How Germany Successfully Implemented Its Intergovernmental FLORA System
MIS Quarterly Executive (2025)

Julia Amend, Simon Feulner, Alexander Rieger, Tamara Roth, Gilbert Fridgen, and Tobias Guggenberger
This paper presents a case study on Germany's implementation of FLORA, a blockchain-based IT system designed to manage the intergovernmental processing of asylum seekers. It analyzes how the project navigated legal and technical challenges across different government levels. Based on the findings, the study offers three key recommendations for successfully deploying similar complex, multi-agency IT systems in the public sector.

Problem Governments face significant challenges in digitalizing services that require cooperation across different administrative layers, such as federal and state agencies. Legal mandates often require these layers to maintain separate IT systems, which complicates data exchange and modernization. Germany's asylum procedure previously relied on manually sharing Excel-based lists between agencies, a process that was slow, error-prone, and created data privacy risks.

Outcome - FLORA replaced inefficient Excel-based lists with a decentralized system, enabling a more efficient and secure exchange of procedural information between federal and state agencies.
- The system created a 'single procedural source of truth,' which significantly improved the accuracy, completeness, and timeliness of information for case handlers.
- By streamlining information exchange, FLORA reduced the time required for initial stages of the asylum procedure by up to 50%.
- The blockchain-based architecture enhanced legal compliance by reducing procedural errors and providing a secure way to manage data that adheres to strict GDPR privacy requirements.
- The study recommends that governments consider decentralized IT solutions to avoid the high hidden costs of centralized systems, deploy modular solutions to break down legacy architectures, and use a Software-as-a-Service (SaaS) model to lower initial adoption barriers for agencies.
intergovernmental IT systems, digital government, blockchain, public sector innovation, case study, asylum procedure, Germany
The Danish Business Authority's Approach to the Ongoing Evaluation of AI Systems
MIS Quarterly Executive (2025)

Oliver Krancher, Per Rådberg Nagbøl, Oliver Müller
This study examines the strategies employed by the Danish Business Authority (DBA), a pioneering public-sector adopter of AI, for the continuous evaluation of its AI systems. Through a case study of the DBA's practices and their custom X-RAI framework, the paper provides actionable recommendations for other organizations on how to manage AI systems responsibly after deployment.

Problem AI systems can degrade in performance over time, a phenomenon known as model drift, leading to inaccurate or biased decisions. Many organizations lack established procedures for the ongoing monitoring and evaluation of AI systems post-deployment, creating risks of operational failures, financial losses, and non-compliance with regulations like the EU AI Act.
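The in-production monitoring that such evaluation processes rely on often starts with a simple distribution-shift check. A minimal illustrative sketch (synthetic data, not the DBA's X-RAI tooling) uses a two-sample Kolmogorov-Smirnov test to flag when production model scores have drifted away from the reference distribution captured at deployment:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

# Reference scores captured when the model was approved for production,
# versus scores observed later; the 0.4 shift simulates model drift.
reference_scores = rng.normal(0.0, 1.0, 2000)
production_scores = rng.normal(0.4, 1.0, 2000)

# A small p-value means the production distribution no longer matches
# the reference, so the system should be flagged for formal evaluation.
statistic, p_value = ks_2samp(reference_scores, production_scores)
drift_detected = bool(p_value < 0.01)
```

In practice such a check would run on a schedule against each monitored feature or output, with alerts feeding the kind of formal post-implementation evaluations the study describes.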

Outcome - Organizations need a multi-faceted approach to AI evaluation, as single strategies like human oversight or periodic audits are insufficient on their own.
- The study presents the DBA's three-stage evaluation process: pre-production planning, in-production monitoring, and formal post-implementation evaluations.
- A key strategy is 'enveloping' AI systems and their evaluations, which means setting clear, pre-defined boundaries for the system's use and how it will be monitored to prevent misuse and ensure accountability.
- The DBA uses an MLOps platform and an 'X-RAI' (Transparent, Explainable, Responsible, Accurate AI) framework to ensure traceability, automate deployments, and guide risk assessments.
- Formal evaluations should use deliberate sampling, including random and negative cases, and 'blind' reviews (where caseworkers assess a case without seeing the AI's prediction) to mitigate human and machine bias.
AI evaluation, AI governance, model drift, responsible AI, MLOps, public sector AI, case study
How Stakeholders Operationalize Responsible AI in Data-Sensitive Contexts
MIS Quarterly Executive (2025)

Shivaang Sharma, Angela Aristidou
This study investigates the challenges of implementing responsible AI in complex, multi-stakeholder environments such as humanitarian crises. Researchers analyzed the deployment of six AI tools, identifying significant gaps in expectations and values among developers, aid agencies, and affected populations. Based on these findings, the paper introduces the concept of "AI Responsibility Rifts" (AIRRs) and proposes the SHARE framework to help organizations navigate these disagreements.

Problem Traditional approaches to AI safety focus on objective, technical risks like hallucinations or data bias. This perspective is insufficient for data-sensitive contexts because it overlooks the subjective disagreements among diverse stakeholders about an AI tool's purpose, impact, and ethical boundaries. These unresolved conflicts, or "rifts," can hinder the adoption of valuable AI tools and lead to unintended negative consequences for vulnerable populations.

Outcome - The study introduces the concept of "AI Responsibility Rifts" (AIRRs), defined as misalignments in stakeholders' subjective expectations, values, and perceptions of an AI system's impact.
- It identifies five key areas where these rifts occur: Safety, Humanity, Accountability, Reliability, and Equity.
- The paper proposes the SHARE framework, a self-diagnostic questionnaire designed to help organizations identify and address these rifts among their stakeholders.
- It provides core recommendations and caveats for executives to close the gaps in each of the five rift areas, promoting a more inclusive and effective approach to responsible AI.
Responsible AI, AI ethics, stakeholder management, humanitarian AI, AI governance, data-sensitive contexts, SHARE framework
How to Operationalize Responsible Use of Artificial Intelligence
MIS Quarterly Executive (2025)

Lorenn P. Ruster, Katherine A. Daniell
This study outlines a practical five-phase process for organizations to translate responsible AI principles into concrete business practices. Based on participatory action research with two startups, the paper provides a roadmap for crafting specific responsibility pledges and embedding them into organizational processes, moving beyond abstract ethical statements.

Problem Many organizations are committed to the responsible use of AI but struggle with how to implement it practically, creating a significant "principle-to-practice gap". This confusion can lead to inaction or superficial efforts known as "ethics-washing," where companies appear ethical without making substantive changes. The study addresses the lack of clear, actionable guidance for businesses, especially smaller ones, on where to begin.

Outcome - Presents a five-phase process for operationalizing responsible AI: 1) Buy-in, 2) Intuition-building, 3) Pledge-crafting, 4) Pledge-communicating, and 5) Pledge-embedding.
- Argues that responsible AI should be approached as a systems problem, considering organizational mindsets, culture, and processes, not just technical fixes.
- Recommends that organizations create contextualized, action-oriented "pledges" rather than simply adopting generic AI principles.
- Finds that investing in responsible AI practices early, even in small projects, helps build organizational capability and transfers to future endeavors.
- Provides a framework for businesses to navigate communication challenges, balancing transparency with commercial interests to build user trust.
Responsible AI, AI Ethics, Operationalization, Systems Thinking, AI Governance, Pledge-making, Startups
Transforming Energy Management with an AI-Enabled Digital Twin
MIS Quarterly Executive (2025)

Hadi Ghanbari, Petter Nissinen
This paper reports on a case study of how one of Europe's largest district heating providers, called EnergyCo, implemented an AI-assisted digital twin to improve energy efficiency and sustainability. The study details the implementation process and its outcomes, providing six key recommendations for executives in other industries who are considering adopting digital twin technology.

Problem Large-scale energy providers face significant challenges in managing complex district heating networks due to fluctuating energy prices, the shift to decentralized renewable energy sources, and operational inefficiencies from siloed departments. Traditional control systems lack the comprehensive, real-time view needed to optimize the entire network, leading to energy loss, higher costs, and difficulties in achieving sustainability goals.

Outcome - The AI-enabled digital twin provided a comprehensive, real-time representation of the entire district heating network, replacing fragmented views from legacy systems.
- It enabled advanced simulation and optimization, allowing the company to improve operational efficiency, manage fluctuating energy prices, and move toward its carbon neutrality goals.
- The system facilitated scenario-based decision-making, helping operators forecast demand, optimize temperatures and pressures, and reduce heat loss.
- The digital twin enhanced cross-departmental collaboration by providing a shared, holistic view of the network's operations.
- It enabled a shift from reactive to proactive maintenance by using predictive insights to identify potential equipment failures before they occur, reducing costs and downtime.
Digital Twin, Energy Management, District Heating, AI, Cyber-Physical Systems, Sustainability, Case Study
How a Utility Company Established a Corporate Data Culture for Data-Driven Decision Making
MIS Quarterly Executive (2024)

Philipp Staudt, Rainer Hoffmann
This paper presents a case study of a large German utility company's successful transition to a data-driven organization. It outlines the strategy, which involved three core transformations: enabling the workforce, improving the data lifecycle, and implementing employee-centered data management. The study provides actionable recommendations for industrial organizations facing similar challenges.

Problem Many industrial companies, particularly in the utility sector, struggle to extract value from their data. The ongoing energy transition, with the rise of renewable energy sources and electric vehicles, has made traditional, heuristic-based decision-making obsolete, creating an urgent need for a robust corporate data culture to manage increasing complexity and ensure grid stability.

Outcome - A data culture was successfully established through three intertwined transformations: enabling the workforce, improving the data lifecycle, and transitioning to employee-centered data management.
- Enabling the workforce involved upskilling programs ('Data and AI Multipliers'), creating platforms for knowledge sharing, and clear communication to ensure widespread buy-in and engagement.
- The data lifecycle was improved by establishing new data infrastructure for real-time data, creating a central data lake, and implementing a strong data governance framework with new roles like 'data officers' and 'data stewards'.
- An employee-centric approach, featuring cross-functional teams, showcasing quick wins to demonstrate value, and transparent communication, was crucial for overcoming resistance and building trust.
- The transformation resulted in the deployment of over 50 data-driven solutions that replaced outdated processes and improved decision-making in real-time operations, maintenance, and long-term planning.
data culture, data-driven decision making, utility company, energy transition, change management, data governance, case study
The Hidden Causes of Digital Investment Failures
MIS Quarterly Executive (2024)

Joe Peppard, R. M. Bastien
This study analyzes hundreds of digital projects to uncover the subtle, hidden root causes behind their frequent failure or underachievement. It moves beyond commonly cited symptoms, like budget overruns, to identify five fundamental organizational and structural issues that prevent companies from realizing value from their technology investments. The analysis is supported by an illustrative case study of a major insurance company's large-scale transformation program.

Problem Organizations invest heavily in digital technology expecting significant returns, but most struggle to achieve their goals, and project success rates have not improved over time. Despite an abundance of project management frameworks and best practices, companies often address the symptoms of failure rather than the underlying problems. This research addresses the gap by identifying the deep-rooted, often surprising causes for these persistent investment failures.

Outcome - The Illusion of Control: Business leaders believe they are controlling projects through metrics and governance, but this is an illusion that masks a lack of real influence over value creation.
- The Fallacy of the “Working System”: The primary goal becomes delivering a functional IT system on time and on budget, rather than achieving the intended business performance improvements.
- Conflicts of Interest: The conventional model of a single, centralized IT department creates inherent conflicts of interest, as the same group is responsible for designing, building, and quality-assuring systems.
- The IT Amnesia Syndrome: A project-by-project focus leads to a collective organizational memory loss about why and how systems were built, creating massive complexity and technical debt for future projects.
- Managing Expenses, Not Assets: Digital systems are treated as short-term expenses to be managed rather than long-term productive assets whose value must be cultivated over their entire lifecycle.
digital investment, project failure, IT governance, root cause analysis, business value, single-counter IT model, technical debt
Translating AI Ethics Principles into Practice to Support Robotic Process Automation Implementation
MIS Quarterly Executive (2024)

Dörte Schulte-Derne, Ulrich Gnewuch
This study investigates how abstract AI ethics principles can be translated into concrete actions during technology implementation. Through a longitudinal case study at a German energy service provider, the authors observed the large-scale rollout of Robotic Process Automation (RPA) over 30 months. The research provides actionable recommendations for leaders to navigate the ethical challenges and employee concerns that arise from AI-driven automation.

Problem Organizations implementing AI to automate processes often face uncertainty, fear, and resistance from employees. While high-level AI ethics principles exist to provide guidance, business leaders struggle to apply these abstract concepts in practice. This creates a significant gap between knowing *what* ethical goals to aim for and knowing *how* to achieve them during a real-world technology deployment.

Outcome - Define clear roles for implementing and supervising AI systems, and ensure senior leaders accept overall responsibility for any negative consequences.
- Strive for a fair distribution of AI's benefits and costs among all employees, addressing tensions in a diverse workforce.
- Increase transparency by making the AI's work visible (e.g., allowing employees to observe a bot at a dedicated workstation) to turn fear into curiosity.
- Enable open communication among trusted peers, creating a 'safe space' for employees to discuss concerns without feeling judged.
- Help employees cope with fears by involving them in the implementation process and avoiding the overwhelming removal of all routine tasks at once.
- Involve employee representation bodies and data protection officers from the beginning of a new AI initiative to proactively address privacy and labor concerns.
AI ethics, Robotic Process Automation (RPA), change management, technology implementation, case study, employee resistance, ethical guidelines
Combining Low-Code/No-Code with Noncompliant Workarounds to Overcome a Corporate System's Limitations
MIS Quarterly Executive (2024)

Robert M. Davison, Louie H. M. Wong, Steven Alter
This study explores how employees at a warehouse in Hong Kong utilize low-code/no-code principles with everyday tools like Microsoft Excel to create unofficial solutions. It examines these noncompliant but essential workarounds that compensate for the shortcomings of their mandated corporate software system. The research is based on a qualitative case study involving interviews with warehouse staff.

Problem A global company implemented a standardized, non-customizable corporate system (Microsoft Dynamics) that was ill-suited for the unique logistical needs of its Hong Kong operations. This created significant operational gaps, particularly in delivery scheduling, leaving employees unable to perform critical tasks using the official software.

Outcome - Employees effectively use Microsoft Excel as a low-code tool to create essential, noncompliant workarounds that are vital for daily operations, such as delivery management.
- These employee-driven solutions, developed without formal low-code platforms or IT approval, become institutionalized and crucial for business success, highlighting the value of 'shadow IT'.
- The study argues that low-code/no-code development is not limited to formal platforms and that managers should recognize, support, and govern these informal solutions.
- Businesses are advised to adopt a portfolio approach to low-code development, leveraging tools like Excel alongside formal platforms, to empower employees and solve real-world operational problems.
Low-Code/No-Code, Workarounds, Shadow IT, Citizen Development, Enterprise Systems, Case Study, Microsoft Excel
What it takes to control AI by design: human learning
AI & SOCIETY (2025)

Dov Te'eni, Inbal Yahav, David Schwartz
This study proposes a robust framework, based on systems theory, for maintaining meaningful human control over complex human-AI systems. The framework emphasizes the importance of continual human learning to parallel advancements in machine learning, operating through two distinct modes: a stable mode for efficient operation and an adaptive mode for learning. The authors demonstrate this concept with a method called reciprocal human-machine learning applied to a critical text classification system.

Problem Traditional methods for control and oversight are insufficient for the complexity of modern AI technologies, creating a gap in ensuring that critical AI systems remain aligned with human values and goals. As AI becomes more autonomous and operates in volatile environments, there is an urgent need for a new approach to design systems that allow humans to effectively stay in control and adapt to changing circumstances.

Outcome - The study introduces a framework for human control over AI that operates at multiple levels and in two modes: stable and adaptive.
- Effective control requires continual human learning to match the pace of machine learning, ensuring humans can stay 'in the loop' and 'in control'.
- A method called 'reciprocal human-machine learning' is presented, where humans and AI learn from each other's feedback in an adaptive mode.
- This approach results in high-performance AI systems that are unbiased and aligned with human values.
- The framework provides a model for designing control in critical AI systems that operate in dynamic environments.
Human-AI system, Control, Reciprocal learning, Feedback, Oversight