How Boards of Directors Govern Artificial Intelligence

Benjamin van Giffen, Helmuth Ludwig
This study investigates how corporate boards of directors oversee and integrate Artificial Intelligence (AI) into their governance practices. Based on in-depth interviews with high-profile board members from diverse industries, the research identifies common challenges and provides examples of effective strategies for board-level AI governance.

Problem Despite the transformative impact of AI on the business landscape, the majority of corporate boards struggle to understand its implications and their role in governing it. This creates a significant gap, as boards have a fiduciary responsibility to oversee strategy, risk, and investment related to critical technologies, yet AI is often not a mainstream boardroom topic.

Outcome - Identified four key groups of board-level AI governance issues: Strategy and Firm Competitiveness, Capital Allocation, AI Risks, and Technology Competence.
- Boards should ensure AI is integrated into the company's core business strategy by evaluating its impact on the competitive landscape and making it a key topic in annual strategy meetings.
- Effective capital allocation involves encouraging AI experimentation, securing investments in foundational AI capabilities, and strategically considering external partnerships and acquisitions.
- To manage risks, boards must engage with experts, integrate AI-specific risks into Enterprise Risk Management (ERM) frameworks, and address ethical, reputational, and legal challenges.
- Enhancing technology competence requires boards to develop their own AI literacy, review board and committee composition for relevant expertise, and include AI competency in executive succession planning.
Keywords AI governance, board of directors, corporate governance, artificial intelligence, strategic management, risk management, technology competence
Fueling Digital Transformation with Citizen Developers and Low-Code Development

Ainara Novales, Rubén Mancha
This study examines how organizations can leverage low-code development platforms and citizen developers (non-technical employees) to accelerate digital transformation. Through in-depth case studies of two early adopters, Hortilux and Volvo Group, along with interviews from seven other firms, the paper identifies key strategies and challenges. The research provides five actionable recommendations for business leaders to successfully implement low-code initiatives.

Problem Many organizations struggle to keep pace with digital innovation due to a persistent shortage and high cost of professional software developers. This creates a significant bottleneck in application development, slowing down responsiveness to customer needs and hindering digital transformation goals. The study addresses how to overcome this resource gap by empowering business users to create their own software solutions.

Outcome - Set a clear strategy for selecting the right use cases for low-code development, starting with simple, low-complexity tasks like process automation.
- Identify, assign, and provide training to upskill tech-savvy employees into citizen developers, ensuring they have the support and guidance needed.
- Establish a dedicated low-code team or department to provide organization-wide support, training, and governance for citizen development initiatives.
- Ensure the low-code architecture is extendable, reusable, and up-to-date to avoid creating complex, siloed applications that are difficult to maintain.
- Evaluate the technical requirements and constraints of different solutions to select the low-code platform that best fits the organization's specific needs.
Keywords low-code development, citizen developers, digital transformation, IT strategy, application development, software development bottleneck, case study
F. Warren McFarlan's Pioneering Role in Impacting IT Management Through Academic Research

Blake Ives, Mary Lacity, Jeanne Ross
This article chronicles the distinguished career of F. Warren McFarlan, a seminal figure in the field of IT management. Based on interviews with McFarlan and his colleagues, as well as archival material, the paper details his immense contribution to bridging the divide between academic research and practical IT management. It highlights his methods, influential frameworks, and enduring legacy in educating generations of IT practitioners and researchers.

Problem There is often a significant gap between academic research and the practical needs of business managers. Academics typically focus on theory and description, while business leaders require actionable, prescriptive insights. This paper addresses this challenge by examining the career of F. Warren McFarlan as a case study in how to successfully produce practice-based research that is valuable to both the academic and business communities.

Outcome - F. Warren McFarlan was a foundational figure who played a pioneering role in establishing IT management as a respected academic and business discipline.
- He effectively bridged the gap between academia and industry by developing practical frameworks and using the case study method to teach senior executives how to manage technology strategically.
- Through his extensive body of research, including over 300 cases and numerous influential articles, he provided managers with accessible tools to assess IT project risk and align technology with business strategy.
- McFarlan was instrumental in championing academic outlets for practice-based research, notably serving as editor-in-chief of MIS Quarterly during a critical period to ensure its survival and relevance.
- His legacy includes not only his own research but also his mentorship of junior faculty and his role in building the IT management program at Harvard Business School.
Keywords F. Warren McFarlan, IT Management, Practice-Based Research, Academic-Practitioner Gap, Case Study Research, Harvard Business School, Strategic IT
Experiences and Lessons Learned at a Small and Medium-Sized Enterprise (SME) Following Two Ransomware Attacks

Donald Wynn, Jr., W. David Salisbury, Mark Winemiller
This paper presents a case study of a small U.S. manufacturing company that suffered two distinct ransomware attacks four years apart, despite strengthening its cybersecurity after the first incident. The study analyzes both attacks, the company's response, and the lessons learned from the experiences. The goal is to provide actionable recommendations to help other small and medium-sized enterprises (SMEs) improve their defenses and recovery strategies against evolving cyber threats.

Problem Small and medium-sized enterprises (SMEs) face unique cybersecurity challenges due to significant resource constraints compared to larger corporations. They often lack the financial capacity, specialized expertise, and trained workforce to implement and maintain adequate technical and procedural controls. This vulnerability is increasingly exploited by cybercriminals, with a high percentage of ransomware attacks specifically targeting these smaller, less-defended businesses.

Outcome - All businesses are targets: The belief in 'security by obscurity' is a dangerous misconception; any online presence makes a business a potential target for cyberattacks.
- Comprehensive backups are essential: Backups must include not only data but also system configurations and software to enable a full and timely recovery.
- Management buy-in is critical: Senior leadership must understand the importance of cybersecurity and provide the necessary funding and organizational support for robust defense measures.
- People are a key vulnerability: Technical defenses can be bypassed by human error, as demonstrated by the second attack which originated from a phishing email, underscoring the need for continuous employee training.
- Cybercrime is an evolving 'arms race': Attackers are becoming increasingly sophisticated, professional, and organized, requiring businesses to continually adapt and strengthen their defenses.
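The backup lesson above — that recovery requires system configurations and software, not just data — can be illustrated with a short Python sketch. This is a minimal, hypothetical example; the paths, file names, and manifest layout are assumptions for illustration, not details from the case company:

```python
import tarfile
from pathlib import Path

# Hypothetical backup scope: data alone is not enough for a full,
# timely recovery after ransomware; configurations and installation
# media must be captured as well.
BACKUP_SCOPE = {
    "data": ["orders.db", "customers.csv"],
    "config": ["app.ini", "firewall.rules"],   # system configuration
    "software": ["installer_v2.4.exe"],        # installation media
}

def build_backup(root: Path, archive: Path) -> list[str]:
    """Archive data, configuration, and software together so a
    recovery can rebuild systems rather than just restore files."""
    included = []
    with tarfile.open(archive, "w:gz") as tar:
        for category, names in BACKUP_SCOPE.items():
            for name in names:
                path = root / category / name
                if path.exists():
                    tar.add(path, arcname=f"{category}/{name}")
                    included.append(f"{category}/{name}")
    return included
```

Restoring from such an archive rebuilds the environment in one pass, which is the "full and timely recovery" the second bullet calls for.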
Keywords ransomware, cybersecurity, SME, case study, incident response, cyber attack, information security
Evolution of the Metaverse

Mary Lacity, Jeffrey K. Mullins, Le Kuai
This paper explores the potential opportunities and risks of the emerging metaverse for business and society through an interview format with leading researchers. The study analyzes the current state of metaverse technologies, their potential business applications, and critical considerations for governance and ethical implementation for IT practitioners.

Problem Following renewed corporate interest and massive investment, the concept of the metaverse has generated significant hype, but businesses lack clarity on its definition, tangible value, and long-term impact. This creates uncertainty for leaders about how to approach the technology, differentiate it from past virtual worlds, and navigate the significant risks of surveillance, data privacy, and governance.

Outcome - The business value of the metaverse centers on providing richer, safer experiences for customers and employees, reducing costs, and meeting organizational goals through applications like immersive training, virtual collaboration, and digital twins.
- Companies face a critical choice between centralized 'Web 2' platforms, which monetize user data, and decentralized 'Web 3' models that offer users more control over their digital assets and identity.
- The metaverse can improve employee onboarding, training for dangerous tasks, and collaboration, offering a greater sense of presence than traditional videoconferencing.
- Key challenges include the lack of a single, interoperable metaverse (which is likely over a decade away), limited current capabilities of decentralized platforms, and the potential for negative consequences like addiction and surveillance.
- Businesses are encouraged to explore potential use cases, participate in creating open standards, and consider both the immense promise and potential perils before making significant investments.
Keywords Metaverse, Virtual Worlds, Augmented Reality, Web 3.0, Digital Twin, Business Strategy, Governance
Boundary Management Strategies for Leading Digital Transformation in Smart Cities

Jocelyn Cranefield, Jan Pries-Heje
This study investigates the leadership challenges inherent in smart city digital transformations. Based on in-depth interviews with leaders from 12 cities, the research identifies common obstacles and describes three 'boundary management' strategies leaders use to overcome them and drive sustainable change.

Problem Cities struggle to scale up smart city initiatives beyond the pilot stage because of a fundamental conflict between traditional, siloed city bureaucracy and the integrated, data-driven logic of a smart city. This clash creates significant organizational, political, and cultural barriers that impede progress and prevent the realization of long-term benefits for citizens.

Outcome - Identifies eight key challenges for smart city leaders, including misalignment of municipal structures, restrictive data policies, resistance to innovation, and city politics.
- Finds that successful smart city leaders act as expert 'boundary spanners,' navigating the divide between the traditional institutional logic of city governance and the emerging logic of smart cities.
- Proposes a framework of three boundary management strategies leaders use: 1) Boundary Bridging to generate buy-in and knowledge, 2) Boundary Buffering to protect projects from resistance, and 3) Boundary Building to create new, sustainable governance structures.
Keywords smart cities, digital transformation, leadership, boundary management, institutional logic, urban governance, innovation
Adopt Agile Cybersecurity Policymaking to Counter Emerging Digital Risks

Masoud Afshari-Mofrad, Alireza Amrollahi, Babak Abedin
This study investigates the need for flexibility and speed in creating and updating cybersecurity rules within organizations. Through in-depth interviews with cybersecurity professionals, the research identifies key areas of digital risk and provides practical recommendations for businesses to develop more agile and adaptive security policies.

Problem In the face of rapidly evolving cyber threats, many organizations rely on static, outdated cybersecurity policies that are only updated after a security breach occurs. This reactive approach leaves them vulnerable to new attack methods, risks from new technologies, and threats from business partners, creating a significant security gap.

Outcome - Update cybersecurity policies to address risks from outdated legacy systems by implementing modern digital asset and vulnerability management.
- Adapt policies to address emerging technologies like AI by enhancing technology scouting and establishing a resilient cyber risk management framework.
- Strengthen policies for third-party vendors by conducting agile risk assessments and regularly reviewing security controls in contracts.
- Build flexible policies for disruptive external events (like pandemics or geopolitical tensions) through continuous employee training and robust business continuity plans.
Keywords agile cybersecurity, cybersecurity policymaking, digital risk, adaptive security, risk management, third-party risk, legacy systems
Promoting Cybersecurity Information Sharing Across the Extended Value Chain

Olga Biedova, Lakshmi Goel, Justin Zhang, Steven A. Williamson, Blake Ives
This study analyzes an alternative cybersecurity information-sharing forum centered on the extended value chain of a single company in the forest and paper products industry. The paper explores the forum's design, execution, and challenges to provide recommendations for similar company-specific collaborations. The goal is to enhance cybersecurity resilience across interconnected business partners by fostering a more trusting and relevant environment for sharing best practices.

Problem As cyberthreats become more complex, industries with interconnected information and operational technologies (IT/OT) face significant vulnerabilities. Despite government and industry calls for greater collaboration, inter-organizational cybersecurity information sharing remains sporadic due to concerns over confidentiality, competitiveness, and lack of trust. Standard sector-based sharing initiatives can also be too broad to address the specific needs of a company and its unique value chain partners.

Outcome - A company-led, value-chain-specific cybersecurity forum is an effective alternative to broader industry groups, fostering greater trust and more relevant discussions among business partners.
- Key success factors for such a forum include inviting the right participants (security strategy leaders), establishing clear ground rules to encourage open dialogue, and using external facilitators to ensure neutrality.
- The forum successfully shifted the culture from one of distrust to one of transparency and collaboration, leading participants to be more open about sharing experiences, including previous security breaches.
- Participants gained valuable insights into the security maturity of their partners, leading to tangible improvements in cybersecurity practices, such as updating security playbooks, adopting new risk metrics, and enhancing third-party risk management.
- The collaborative model strengthens the entire value chain, as companies learn from each other's strategies, tools, and policies to collectively improve their defense against common threats.
Keywords cybersecurity, information sharing, extended value chain, supply chain security, cyber resilience, forest products industry, inter-organizational collaboration
Unraveling the Role of Cyber Insurance in Fortifying Organizational Cybersecurity

Wojciech Strzelczyk, Karolina Puławska
This study explores how cyber insurance serves as more than just a financial tool for compensating victims of cyber incidents. Based on in-depth interviews with insurance industry experts and policy buyers, the research analyzes how insurance improves an organization's cybersecurity across three distinct stages: pre-purchase, post-purchase, and post-cyberattack.

Problem As businesses increasingly rely on digital technologies, they face a growing risk of cyberattacks that can lead to severe financial losses, reputational harm, and regulatory penalties. Many companies possess inadequate cybersecurity measures, and there is a need to understand how external mechanisms like insurance can proactively strengthen defenses rather than simply covering losses after an attack.

Outcome - Cyber insurance actively enhances an organization's security posture, not just providing financial compensation after an incident.
- The pre-purchase underwriting process forces companies to rigorously evaluate and improve their cybersecurity practices to even qualify for a policy.
- Post-purchase, insurers require continuous improvement through audits and training, often providing resources and expertise to help clients strengthen their defenses.
- Following an attack, cyber insurance provides access to critical incident management services, including expert support for damage containment, system restoration, and post-incident analysis to prevent future breaches.
Keywords cyber insurance, cybersecurity, risk management, organizational cybersecurity, incident response, underwriting
How HireVue Created "Glass Box" Transparency for its AI Application

Monideepa Tarafdar, Irina Rets, Lindsey Zuloaga, Nathan Mondragon
This paper presents a case study on HireVue, a company that provides an AI application for assessing job interviews. It describes the transparency-related challenges HireVue faced and explains how it addressed them by developing a "glass box" approach, which focuses on making the entire system of AI development and deployment understandable, rather than just the technical algorithm.

Problem AI applications used for critical decisions, such as hiring, are often perceived as technical "black boxes." This lack of clarity creates significant challenges for businesses in trusting the technology, ensuring fairness, mitigating bias, and complying with regulations, which hinders the responsible adoption of AI in recruitment.

Outcome - The study introduces a "glass box" model for AI transparency, which shifts focus from the technical algorithm to the broader sociotechnical system, including design processes, client interactions, and organizational functions.
- HireVue implemented five types of transparency practices: pre-deployment client-focused, internal, post-deployment client-focused, knowledge-related, and audit-related.
- This multi-faceted approach helps build trust with clients, regulators, and applicants by providing clarity on the AI's application, limitations, and validation processes.
- The findings serve as a practical guide for other AI software companies on how to create effective and comprehensive transparency for their own applications, especially in high-stakes fields.
Keywords AI transparency, algorithmic hiring, glass box model, ethical AI, recruitment technology, HireVue, case study
How Germany Successfully Implemented Its Intergovernmental FLORA System

Julia Amend, Simon Feulner, Alexander Rieger, Tamara Roth, Gilbert Fridgen, and Tobias Guggenberger
This paper presents a case study on Germany's implementation of FLORA, a blockchain-based IT system designed to manage the intergovernmental processing of asylum seekers. It analyzes how the project navigated legal and technical challenges across different government levels. Based on the findings, the study offers three key recommendations for successfully deploying similar complex, multi-agency IT systems in the public sector.

Problem Governments face significant challenges in digitalizing services that require cooperation across different administrative layers, such as federal and state agencies. Legal mandates often require these layers to maintain separate IT systems, which complicates data exchange and modernization. Germany's asylum procedure previously relied on manually sharing Excel-based lists between agencies, a process that was slow, error-prone, and created data privacy risks.

Outcome - FLORA replaced inefficient Excel-based lists with a decentralized system, enabling a more efficient and secure exchange of procedural information between federal and state agencies.
- The system created a 'single procedural source of truth,' which significantly improved the accuracy, completeness, and timeliness of information for case handlers.
- By streamlining information exchange, FLORA reduced the time required for initial stages of the asylum procedure by up to 50%.
- The blockchain-based architecture enhanced legal compliance by reducing procedural errors and providing a secure way to manage data that adheres to strict GDPR privacy requirements.
- The study recommends that governments consider decentralized IT solutions to avoid the high hidden costs of centralized systems, deploy modular solutions to break down legacy architectures, and use a Software-as-a-Service (SaaS) model to lower initial adoption barriers for agencies.
Keywords intergovernmental IT systems, digital government, blockchain, public sector innovation, case study, asylum procedure, Germany
The Danish Business Authority's Approach to the Ongoing Evaluation of AI Systems

Oliver Krancher, Per Rådberg Nagbøl, Oliver Müller
This study examines the strategies employed by the Danish Business Authority (DBA), a pioneering public-sector adopter of AI, for the continuous evaluation of its AI systems. Through a case study of the DBA's practices and their custom X-RAI framework, the paper provides actionable recommendations for other organizations on how to manage AI systems responsibly after deployment.

Problem AI systems can degrade in performance over time, a phenomenon known as model drift, leading to inaccurate or biased decisions. Many organizations lack established procedures for the ongoing monitoring and evaluation of AI systems post-deployment, creating risks of operational failures, financial losses, and non-compliance with regulations like the EU AI Act.

Outcome - Organizations need a multi-faceted approach to AI evaluation, as single strategies like human oversight or periodic audits are insufficient on their own.
- The study presents the DBA's three-stage evaluation process: pre-production planning, in-production monitoring, and formal post-implementation evaluations.
- A key strategy is 'enveloping' AI systems and their evaluations, which means setting clear, pre-defined boundaries for the system's use and how it will be monitored to prevent misuse and ensure accountability.
- The DBA uses an MLOps platform and an 'X-RAI' (Transparent, Explainable, Responsible, Accurate AI) framework to ensure traceability, automate deployments, and guide risk assessments.
- Formal evaluations should use deliberate sampling, including random and negative cases, and 'blind' reviews (where caseworkers assess a case without seeing the AI's prediction) to mitigate human and machine bias.
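The monitoring and sampling practices above can be sketched in Python. This is a minimal illustration under stated assumptions: the function names, the drift threshold, and the sample sizes are invented for the example and are not the DBA's actual X-RAI implementation:

```python
import random

def detect_drift(baseline_rate, recent_predictions, threshold=0.1):
    """Flag possible model drift when the share of positive predictions
    in production shifts more than `threshold` from the baseline rate
    observed at deployment time."""
    recent_rate = sum(recent_predictions) / len(recent_predictions)
    return abs(recent_rate - baseline_rate) > threshold

def blind_review_sample(cases, n_random=5, n_negative=5, seed=42):
    """Build a formal-evaluation sample that deliberately mixes random
    cases with negative cases, then strips the AI's prediction so
    caseworkers can review each case 'blind'."""
    rng = random.Random(seed)
    negatives = [c for c in cases if c["prediction"] == 0]
    sample = rng.sample(cases, min(n_random, len(cases)))
    sample += rng.sample(negatives, min(n_negative, len(negatives)))
    # Hiding the prediction mitigates anchoring on the machine's output.
    return [{k: v for k, v in c.items() if k != "prediction"} for c in sample]
```

Comparing blind human judgments against the hidden predictions afterwards is what lets an evaluation separate human bias from machine bias, as the last bullet describes.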
Keywords AI evaluation, AI governance, model drift, responsible AI, MLOps, public sector AI, case study
How Stakeholders Operationalize Responsible AI in Data-Sensitive Contexts

Shivaang Sharma, Angela Aristidou
This study investigates the challenges of implementing responsible AI in complex, multi-stakeholder environments such as humanitarian crises. Researchers analyzed the deployment of six AI tools, identifying significant gaps in expectations and values among developers, aid agencies, and affected populations. Based on these findings, the paper introduces the concept of "AI Responsibility Rifts" (AIRRs) and proposes the SHARE framework to help organizations navigate these disagreements.

Problem Traditional approaches to AI safety focus on objective, technical risks like hallucinations or data bias. This perspective is insufficient for data-sensitive contexts because it overlooks the subjective disagreements among diverse stakeholders about an AI tool's purpose, impact, and ethical boundaries. These unresolved conflicts, or "rifts," can hinder the adoption of valuable AI tools and lead to unintended negative consequences for vulnerable populations.

Outcome - The study introduces the concept of "AI Responsibility Rifts" (AIRRs), defined as misalignments in stakeholders' subjective expectations, values, and perceptions of an AI system's impact.
- It identifies five key areas where these rifts occur: Safety, Humanity, Accountability, Reliability, and Equity.
- The paper proposes the SHARE framework, a self-diagnostic questionnaire designed to help organizations identify and address these rifts among their stakeholders.
- It provides core recommendations and caveats for executives to close the gaps in each of the five rift areas, promoting a more inclusive and effective approach to responsible AI.
Keywords Responsible AI, AI ethics, stakeholder management, humanitarian AI, AI governance, data-sensitive contexts, SHARE framework
Promises and Perils of Generative AI in Cybersecurity

Pratim Datta, Tom Acton
This paper presents a case study of a fictional insurance company, based on real-life events, to illustrate how generative artificial intelligence (GenAI) can be used for both offensive and defensive cybersecurity purposes. It explores the dual nature of GenAI as a tool for both attackers and defenders, presenting a significant dilemma for IT executives. The study provides actionable recommendations for developing a comprehensive cybersecurity strategy in the age of GenAI.

Problem With the rapid adoption of Generative AI by both cybersecurity defenders and malicious actors, IT leaders face a critical challenge. GenAI significantly enhances the capabilities of attackers to create sophisticated, large-scale, and automated cyberattacks, while also offering powerful new tools for defense. This creates a high-stakes 'AI arms race,' forcing organizations to decide how to strategically embrace GenAI for defense without being left vulnerable to adversaries armed with the same technology.

Outcome - GenAI is a double-edged sword, capable of both triggering and defending against sophisticated cyberattacks, requiring a proactive, not reactive, security posture.
- Organizations must integrate a 'Defense in Depth' (DiD) strategy that extends beyond technology to include processes, a security-first culture, and continuous employee education.
- Robust data governance is crucial to manage and protect data, the primary target of attacks, by classifying its value and implementing security controls accordingly.
- A culture of continuous improvement is essential, involving regular simulations of real-world attacks (red-team/blue-team exercises) and maintaining a zero-trust mindset.
- Companies must fortify defenses against AI-powered social engineering by combining advanced technical filtering with employee training focused on skepticism and verification.
- Businesses should embrace proactive, AI-driven defense mechanisms like AI-powered threat hunting and adaptive honeypots to anticipate and neutralize threats before they escalate.
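The data-governance recommendation above — classifying data by value and attaching security controls accordingly — can be sketched as a simple lookup. The classification tiers and control names here are illustrative assumptions, not a standard drawn from the study:

```python
# Illustrative mapping from data classification to minimum controls.
CONTROLS = {
    "public":       {"encryption": False, "mfa": False, "audit_log": False},
    "internal":     {"encryption": True,  "mfa": False, "audit_log": True},
    "confidential": {"encryption": True,  "mfa": True,  "audit_log": True},
}

def required_controls(classification: str) -> dict:
    """Return the minimum controls for a data asset. An unknown
    classification defaults to the strictest tier, reflecting the
    zero-trust mindset the study recommends."""
    return CONTROLS.get(classification, CONTROLS["confidential"])
```

Defaulting unknown assets to the strictest tier means a gap in the classification inventory fails safe rather than open.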
Keywords Generative AI, Cybersecurity, Black-hat AI, White-hat AI, Threat Hunting, Social Engineering, Defense in Depth
How to Operationalize Responsible Use of Artificial Intelligence

Lorenn P. Ruster, Katherine A. Daniell
This study outlines a practical five-phase process for organizations to translate responsible AI principles into concrete business practices. Based on participatory action research with two startups, the paper provides a roadmap for crafting specific responsibility pledges and embedding them into organizational processes, moving beyond abstract ethical statements.

Problem Many organizations are committed to the responsible use of AI but struggle with how to implement it practically, creating a significant "principle-to-practice gap". This confusion can lead to inaction or superficial efforts known as "ethics-washing," where companies appear ethical without making substantive changes. The study addresses the lack of clear, actionable guidance for businesses, especially smaller ones, on where to begin.

Outcome - Presents a five-phase process for operationalizing responsible AI: 1) Buy-in, 2) Intuition-building, 3) Pledge-crafting, 4) Pledge-communicating, and 5) Pledge-embedding.
- Argues that responsible AI should be approached as a systems problem, considering organizational mindsets, culture, and processes, not just technical fixes.
- Recommends that organizations create contextualized, action-oriented "pledges" rather than simply adopting generic AI principles.
- Finds that investing in responsible AI practices early, even in small projects, helps build organizational capability and transfers to future endeavors.
- Provides a framework for businesses to navigate communication challenges, balancing transparency with commercial interests to build user trust.
Keywords Responsible AI, AI Ethics, Operationalization, Systems Thinking, AI Governance, Pledge-making, Startups
Successfully Mitigating AI Management Risks to Scale AI Globally

Thomas Hutzschenreuter, Tim Lämmermann, Alexander Sake, Helmuth Ludwig
This study presents an in-depth case study of the industrial AI pioneer Siemens AG to understand how companies can effectively scale artificial intelligence systems. It identifies five critical technology management risks associated with both generative and predictive AI and provides practical recommendations for mitigating them to create company-wide business impact.

Problem Many companies struggle to effectively scale modern AI systems, with over 70% of implementation projects failing to create a measurable business impact. These failures stem from machine learning's unique characteristics, which amplify existing technology management challenges and introduce entirely new ones that firms are often unprepared to handle.

Outcome - Identifies five critical technology management risks that impede scaling AI:
- Missing or falsely evaluated potential AI use case opportunities.
- Algorithmic training and data quality issues.
- Task-specific system complexities.
- Mismanagement of system stakeholders.
- Threats from provider and system dependencies.
Keywords AI management, risk mitigation, scaling AI, generative AI, predictive AI, technology management, case study
How Siemens Empowered Workforce Re- and Upskilling Through Digital Learning

Leonie Rebecca Freise, Eva Ritz, Ulrich Bretschneider, Roman Rietsche, Gunter Beitinger, and Jan Marco Leimeister
This case study examines how Siemens successfully implemented a human-centric, bottom-up approach to employee reskilling and upskilling through digital learning. The paper presents a four-phase model for leveraging information systems to address skill gaps and provides five key recommendations for organizations to foster lifelong learning in dynamic manufacturing environments.

Problem The rapid digital transformation in manufacturing is creating a significant skills gap, with a large share of companies reporting skill shortages. Traditional training methods are often not scalable or adaptable enough to meet these evolving demands, presenting a major challenge for organizations trying to build a future-ready workforce.

Outcome - The study introduces a four-phase model for developing human-centric digital learning: 1) Recognizing employee needs, 2) Identifying key employee traits (like self-regulation and attitude), 3) Developing tailored strategies, and 4) Aligning strategies with organizational goals.
- Key employee needs for successful digital learning include task-oriented courses, peer exchange, on-the-job training, regular feedback, personalized learning paths, and micro-learning formats ('learning nuggets').
- The paper proposes four distinct learning strategies based on employees' attitude and self-regulated learning skills, ranging from community mentoring for those low in both, to personalized courses for those high in both.
- Five practical recommendations for companies are provided: 1) Foster a lifelong learning culture, 2) Tailor digital learning programs, 3) Create dedicated spaces for collaboration, 4) Incorporate flexible training formats, and 5) Use analytics to provide feedback.
digital learning, upskilling, reskilling, workforce development, human-centric, manufacturing, case study
A Three-Layer Model for Successful Organizational Digital Transformation

Ferry Nolte, Alexander Richter, Nadine Guhr
This study analyzes the digital transformation journey on the shop floor of automotive supplier Continental AG. Based on this case study, the paper proposes a practical three-layer model—IT evolution, work practices evolution, and mindset evolution—to guide organizations through successful digital transformation. The model provides recommended actions for aligning these layers to reduce implementation risks and improve outcomes.

Problem Many industrial companies struggle with digital transformation, particularly on the shop floor, where environments are often poorly integrated with digital technology. These transformation efforts are frequently implemented as a 'big bang,' overwhelming workers with new technologies and revised work practices, which can lead to resistance, failure to adopt new systems, and the loss of experienced employees.

Outcome - Successful digital transformation requires a coordinated and synchronized evolution across three interdependent layers: IT, work practices, and employee mindset.
- The paper introduces a practical three-layer model (IT Evolution, Work Practices Evolution, and Mindset Evolution) as a roadmap for managing the complexities of organizational change.
- A one-size-fits-all approach fails; organizations must provide tailored support, tools, and training that cater to the diverse skill levels and starting points of all employees, especially lower-skilled workers.
- To ensure adoption, work processes and performance metrics must be strategically adapted to integrate new digital tools, rather than simply layering technology on top of old workflows.
- A cultural shift is fundamental; success depends on moving away from rigid hierarchies to a culture that empowers employees, encourages experimentation, and fosters a collective readiness for continuous change.
Digital Transformation, Organizational Change, Change Management, Shop Floor Digitalization, Three-Layer Model, Case Study, Dynamic Capabilities