Promises and Perils of Generative AI in Cybersecurity
MIS Quarterly Executive (2025)

Pratim Datta, Tom Acton
This paper presents a case study of a fictional insurance company, based on real-life events, to illustrate how generative artificial intelligence (GenAI) can be used for both offensive and defensive cybersecurity purposes. This dual nature of GenAI, as a tool for attackers and defenders alike, poses a significant dilemma for IT executives. The study provides actionable recommendations for developing a comprehensive cybersecurity strategy in the age of GenAI.

Problem With the rapid adoption of Generative AI by both cybersecurity defenders and malicious actors, IT leaders face a critical challenge. GenAI significantly enhances the capabilities of attackers to create sophisticated, large-scale, and automated cyberattacks, while also offering powerful new tools for defense. This creates a high-stakes 'AI arms race,' forcing organizations to decide how to strategically embrace GenAI for defense without being left vulnerable to adversaries armed with the same technology.

Outcome - GenAI is a double-edged sword, capable of both enabling and defending against sophisticated cyberattacks, requiring a proactive, not reactive, security posture.
- Organizations must integrate a 'Defense in Depth' (DiD) strategy that extends beyond technology to include processes, a security-first culture, and continuous employee education.
- Robust data governance is crucial to manage and protect data, the primary target of attacks, by classifying its value and implementing security controls accordingly.
- A culture of continuous improvement is essential, involving regular simulations of real-world attacks (red-team/blue-team exercises) and maintaining a zero-trust mindset.
- Companies must fortify defenses against AI-powered social engineering by combining advanced technical filtering with employee training focused on skepticism and verification.
- Businesses should embrace proactive, AI-driven defense mechanisms like AI-powered threat hunting and adaptive honeypots to anticipate and neutralize threats before they escalate.
Generative AI, Cybersecurity, Black-hat AI, White-hat AI, Threat Hunting, Social Engineering, Defense in Depth
Successfully Mitigating AI Management Risks to Scale AI Globally
MIS Quarterly Executive (2025)

Thomas Hutzschenreuter, Tim Lämmermann, Alexander Sake, Helmuth Ludwig
This study presents an in-depth case study of the industrial AI pioneer Siemens AG to understand how companies can effectively scale artificial intelligence systems. It identifies five critical technology management risks associated with both generative and predictive AI and provides practical recommendations for mitigating them to create company-wide business impact.

Problem Many companies struggle to effectively scale modern AI systems, with over 70% of implementation projects failing to create a measurable business impact. These failures stem from machine learning's unique characteristics, which amplify existing technology management challenges and introduce entirely new ones that firms are often unprepared to handle.

Outcome - The study identifies five critical technology management risks, the first being missed or misjudged AI use case opportunities.
- Algorithmic training and data quality issues.
- Task-specific system complexities.
- Mismanagement of system stakeholders.
- Threats from provider and system dependencies.
AI management, risk mitigation, scaling AI, generative AI, predictive AI, technology management, case study
Transforming Energy Management with an AI-Enabled Digital Twin
MIS Quarterly Executive (2025)

Hadi Ghanbari, Petter Nissinen
This paper reports on a case study of how one of Europe's largest district heating providers, called EnergyCo, implemented an AI-assisted digital twin to improve energy efficiency and sustainability. The study details the implementation process and its outcomes, providing six key recommendations for executives in other industries who are considering adopting digital twin technology.

Problem Large-scale energy providers face significant challenges in managing complex district heating networks due to fluctuating energy prices, the shift to decentralized renewable energy sources, and operational inefficiencies from siloed departments. Traditional control systems lack the comprehensive, real-time view needed to optimize the entire network, leading to energy loss, higher costs, and difficulties in achieving sustainability goals.

Outcome - The AI-enabled digital twin provided a comprehensive, real-time representation of the entire district heating network, replacing fragmented views from legacy systems.
- It enabled advanced simulation and optimization, allowing the company to improve operational efficiency, manage fluctuating energy prices, and move toward its carbon neutrality goals.
- The system facilitated scenario-based decision-making, helping operators forecast demand, optimize temperatures and pressures, and reduce heat loss.
- The digital twin enhanced cross-departmental collaboration by providing a shared, holistic view of the network's operations.
- It enabled a shift from reactive to proactive maintenance by using predictive insights to identify potential equipment failures before they occur, reducing costs and downtime.
Digital Twin, Energy Management, District Heating, AI, Cyber-Physical Systems, Sustainability, Case Study
How a Utility Company Established a Corporate Data Culture for Data-Driven Decision Making
MIS Quarterly Executive (2024)

Philipp Staudt, Rainer Hoffmann
This paper presents a case study of a large German utility company's successful transition to a data-driven organization. It outlines the strategy, which involved three core transformations: enabling the workforce, improving the data lifecycle, and implementing employee-centered data management. The study provides actionable recommendations for industrial organizations facing similar challenges.

Problem Many industrial companies, particularly in the utility sector, struggle to extract value from their data. The ongoing energy transition, with the rise of renewable energy sources and electric vehicles, has made traditional, heuristic-based decision-making obsolete, creating an urgent need for a robust corporate data culture to manage increasing complexity and ensure grid stability.

Outcome - A data culture was successfully established through three intertwined transformations: enabling the workforce, improving the data lifecycle, and transitioning to employee-centered data management.
- Enabling the workforce involved upskilling programs ('Data and AI Multipliers'), creating platforms for knowledge sharing, and clear communication to ensure widespread buy-in and engagement.
- The data lifecycle was improved by establishing new data infrastructure for real-time data, creating a central data lake, and implementing a strong data governance framework with new roles like 'data officers' and 'data stewards'.
- An employee-centric approach, featuring cross-functional teams, showcasing quick wins to demonstrate value, and transparent communication, was crucial for overcoming resistance and building trust.
- The transformation resulted in the deployment of over 50 data-driven solutions that replaced outdated processes and improved decision-making in real-time operations, maintenance, and long-term planning.
data culture, data-driven decision making, utility company, energy transition, change management, data governance, case study
Leveraging Information Systems for Environmental Sustainability and Business Value
MIS Quarterly Executive (2024)

Anne Ixmeier, Franziska Wagner, Johann Kranz
This study analyzes 31 articles from practitioner journals to understand how businesses can use Information Systems (IS) to enhance environmental sustainability. Based on a comprehensive literature review, the research provides five practical recommendations for managers to bridge the gap between sustainability goals and actual implementation, ultimately creating business value.

Problem Many businesses face growing pressure to improve their environmental sustainability but struggle to translate sustainability initiatives into tangible business value. Managers are often unclear on how to effectively leverage information systems to achieve both environmental and financial goals, a challenge referred to as the 'sustainability implementation gap'.

Outcome - Legitimize sustainability by using IS to create awareness and link environmental metrics to business value.
- Optimize processes, products, and services by using IS to reduce environmental impact and improve eco-efficiency.
- Internalize sustainability by integrating it into core business strategies and decision-making, informed by data from environmental management systems.
- Standardize sustainability data by establishing robust data governance to ensure information is accessible, comparable, and transparent across the value chain.
- Collaborate with external partners by using IS to build strategic partnerships and ecosystems that can collectively address complex sustainability challenges.
Information Systems, Environmental Sustainability, Green IS, Business Value, Corporate Strategy, Sustainability Implementation
How Audi Scales Artificial Intelligence in Manufacturing
MIS Quarterly Executive (2024)

André Sagodi, Benjamin van Giffen, Johannes Schniertshauer, Klemens Niehues, Jan vom Brocke
This paper presents a case study on how the automotive manufacturer Audi successfully scaled an artificial intelligence (AI) solution for quality inspection in its manufacturing press shops. It analyzes Audi's four-year journey, from initial exploration to multi-site deployment, to identify key strategies and challenges. The study provides actionable recommendations for senior leaders aiming to capture business value by scaling AI innovations.

Problem Many organizations struggle to move their AI initiatives from the pilot phase to full-scale operational use, failing to realize the technology's full economic potential. This is a particular challenge in manufacturing, where integrating AI with legacy systems and processes presents significant barriers. This study addresses how a company can overcome these challenges to successfully scale an AI solution and unlock long-term business value.

Outcome - Audi successfully scaled an AI-based system to automate the detection of cracks in sheet metal parts, a crucial quality control step in its press shops.
- The success was driven by a strategic four-stage approach: Exploring, Developing, Implementing, and Scaling, with a focus on designing for scalability from the outset.
- Key success factors included creating a single, universal AI model for multiple deployments, leveraging data from various sources to improve the model, and integrating the solution into the broader Volkswagen Group's digital production platform to create synergies.
- The study highlights the importance of decoupling value from cost, which Audi achieved by automating monitoring and deployment pipelines, thereby scaling operations without proportionally increasing expenses.
- Recommendations for other businesses include making AI scaling a strategic priority, fostering collaboration between AI experts and domain specialists, and streamlining operations through automation and robust governance.
Artificial Intelligence, AI Scaling, Manufacturing, Automotive Industry, Case Study, Digital Transformation, Quality Inspection
The Promise and Perils of Low-Code AI Platforms
MIS Quarterly Executive (2024)

Maria Kandaurova, Daniel A. Skog, Petra M. Bosch-Sijtsema
This study investigates the adoption of a low-code conversational Artificial Intelligence (AI) platform within four multinational corporations. Through a case study approach, the research identifies significant challenges that arise from fundamental, yet incorrect, assumptions about low-code technologies. The paper offers recommendations for companies to better navigate the implementation process and unlock the full potential of these platforms.

Problem As businesses increasingly turn to AI for process automation, they often encounter significant hurdles during adoption. Low-code AI platforms are marketed as a solution to simplify this process, but there is limited research on their real-world application. This study addresses the gap by showing how companies' false assumptions about the ease of use, adaptability, and integration of these platforms can limit their effectiveness and return on investment.

Outcome - The usability of low-code AI platforms is often overestimated; non-technical employees typically face a much steeper learning curve than anticipated and still require a foundational level of coding and AI knowledge.
- Adapting low-code AI applications to specific, complex business contexts is challenging and time-consuming, contrary to the assumption of easy tailoring. It often requires significant investment in standardizing existing business processes first.
- Integrating low-code platforms with existing legacy systems and databases is not a simple 'plug-and-play' process. Companies face significant challenges due to incompatible data formats, varied interfaces, and a lack of a comprehensive data strategy.
- Successful implementation requires cross-functional collaboration between IT and business teams, thorough platform testing before procurement, and a strategic approach to reengineering business processes to align with AI capabilities.
Low-Code AI Platforms, Artificial Intelligence, Conversational AI, Implementation Challenges, Digital Transformation, Business Process Automation, Case Study
Combining Low-Code/No-Code with Noncompliant Workarounds to Overcome a Corporate System's Limitations
MIS Quarterly Executive (2024)

Robert M. Davison, Louie H. M. Wong, Steven Alter
This study explores how employees at a warehouse in Hong Kong utilize low-code/no-code principles with everyday tools like Microsoft Excel to create unofficial solutions. It examines these noncompliant but essential workarounds that compensate for the shortcomings of their mandated corporate software system. The research is based on a qualitative case study involving interviews with warehouse staff.

Problem A global company implemented a standardized, non-customizable corporate system (Microsoft Dynamics) that was ill-suited for the unique logistical needs of its Hong Kong operations. This created significant operational gaps, particularly in delivery scheduling, leaving employees unable to perform critical tasks using the official software.

Outcome - Employees effectively use Microsoft Excel as a low-code tool to create essential, noncompliant workarounds that are vital for daily operations, such as delivery management.
- These employee-driven solutions, developed without formal low-code platforms or IT approval, become institutionalized and crucial for business success, highlighting the value of 'shadow IT'.
- The study argues that low-code/no-code development is not limited to formal platforms and that managers should recognize, support, and govern these informal solutions.
- Businesses are advised to adopt a portfolio approach to low-code development, leveraging tools like Excel alongside formal platforms, to empower employees and solve real-world operational problems.
Low-Code/No-Code, Workarounds, Shadow IT, Citizen Development, Enterprise Systems, Case Study, Microsoft Excel
How GuideCom Used the Cognigy.AI Low-Code Platform to Develop an AI-Based Smart Assistant
MIS Quarterly Executive (2024)

Imke Grashoff, Jan Recker
This case study investigates how GuideCom, a medium-sized German software provider, utilized the Cognigy.AI low-code platform to create an AI-based smart assistant. The research follows the company's entire development process to identify the key ways in which low-code platforms enable and constrain AI development. The study illustrates the strategic trade-offs companies face when adopting this approach.

Problem Small and medium-sized enterprises (SMEs) often lack the extensive resources and specialized expertise required for in-house AI development, while off-the-shelf solutions can be too rigid. Low-code platforms are presented as a solution to democratize AI, but there is a lack of understanding regarding their real-world impact. This study addresses the gap by examining the practical enablers and constraints that firms encounter when using these platforms for AI product development.

Outcome - Low-code platforms enable AI development by reducing complexity through visual interfaces, facilitating cross-functional collaboration between IT and business experts, and preserving resources.
- Key constraints of using low-code AI platforms include challenges with architectural integration into existing systems, ensuring the product is expandable for different clients and use cases, and managing security and data privacy concerns.
- Contrary to the 'no-code' implication, existing software development skills are still critical for customizing solutions, re-engineering code, and overcoming platform limitations, especially during testing and implementation.
- Establishing a strong knowledge network with the platform provider (for technical support) and innovation partners like clients (for domain expertise and data) is a crucial factor for success.
- The decision to use a low-code platform is a strategic trade-off; it significantly lowers the barrier to entry for AI innovation but requires careful management of platform dependencies and inherent constraints.
low-code development, AI development, smart assistant, conversational AI, case study, digital transformation, SME
Emergence of IT Implementation Consequences in Organizations: An Assemblage Approach
MIS Quarterly (2025)

Abdul Sesay, Elena Karahanna, Marie-Claude Boudreau
This study investigates how the effects of new technology, specifically body-worn cameras (BWCs), unfold within organizations over time. Using a multi-site case study of three U.S. police departments, the research develops a process model to explain how the consequences of IT implementation emerge. The study identifies three key phases in this process: individuation (selecting the technology and related policies), composition (combining the technology with users), and actualization (using the technology in real-world interactions).

Problem When organizations implement new technology, the results are often unpredictable, with outcomes varying widely between different settings. Existing research has not fully explained why a technology can be successful in one organization but fail in another. This study addresses the gap in understanding how the consequences of a new technology, like police body-worn cameras, actually develop and evolve into established organizational practices.

Outcome - The process through which technology creates new behaviors and practices is complex and non-linear, occurring in three distinct phases (individuation, composition, and actualization).
- Successful implementation is not guaranteed; it depends on the careful alignment of the technology itself (material components) with policies, training, and user adoption (expressive components) at each stage.
- The study found that of the three police departments, only one successfully implemented body cameras because it carefully selected high-quality equipment, developed specific policies for its use, and ensured officers were trained and held accountable.
- The other two departments experienced failure or delays due to poor quality equipment, generic policies, and inconsistent use, which prevented new, positive practices from taking hold.
- The model shows that outcomes emerge over time and may require continuous adjustments, demonstrating that success is an ongoing process, not a one-time event.
IT implementation, Assemblage theory, body-worn camera, organizational change, police technology, process model
Design Knowledge for Virtual Learning Companions from a Value-centered Perspective
Communications of the Association for Information Systems (2024)

Ricarda Schlimbach, Bijan Khosrawi-Rad, Tim C. Lange, Timo Strohmann, Susanne Robra-Bissantz
This study develops design principles for Virtual Learning Companions (VLCs), which are AI-powered chatbots designed to help students with motivation and time management. Using a design science research approach, the authors conducted interviews, workshops, and built and tested several prototypes with students. The research aims to create a framework for designing VLCs that not only provide functional support but also build a supportive, companion-like relationship with the learner.

Problem Working students in higher education often struggle to balance their studies with their jobs, leading to challenges with motivation and time management. While conversational AI like ChatGPT is becoming common, these tools often lack the element of companionship and a holistic approach to learning support. This research addresses the gap in how to design AI learning tools that effectively integrate motivation, time management, and relationship-building from a user-value-centered perspective.

Outcome - The study produced a comprehensive framework for designing Virtual Learning Companions (VLCs), resulting in 9 design principles, 28 meta-requirements, and 33 design features.
- The findings are structured around a “value-in-interaction” model, which proposes that a VLC's value is created across three interconnected layers: the Relationship Layer, the Matching Layer, and the Service Layer.
- Key design principles include creating a human-like and adaptive companion, enabling proactive and reactive behavior, building a trustworthy relationship, providing supportive content, and fostering a motivational and ethical learning environment.
- Evaluation of a coded prototype revealed that different student groups have different preferences, emphasizing that VLCs must be adaptable to their specific educational context and user needs to be effective.
Conversational Agent, Education, Virtual Learning Companion, Design Knowledge, Value