Promises and Perils of Generative AI in Cybersecurity
Pratim Datta, Tom Acton
This paper presents a case study of a fictional insurance company, based on real-life events, to illustrate how generative artificial intelligence (GenAI) can be used for both offensive and defensive cybersecurity purposes. It explores the dual nature of GenAI as a tool for both attackers and defenders, presenting a significant dilemma for IT executives. The study provides actionable recommendations for developing a comprehensive cybersecurity strategy in the age of GenAI.
Problem
With the rapid adoption of Generative AI by both cybersecurity defenders and malicious actors, IT leaders face a critical challenge. GenAI significantly enhances the capabilities of attackers to create sophisticated, large-scale, and automated cyberattacks, while also offering powerful new tools for defense. This creates a high-stakes 'AI arms race,' forcing organizations to decide how to strategically embrace GenAI for defense without being left vulnerable to adversaries armed with the same technology.
Outcome
- GenAI is a double-edged sword, capable of both triggering and defending against sophisticated cyberattacks, requiring a proactive, not reactive, security posture.
- Organizations must integrate a 'Defense in Depth' (DiD) strategy that extends beyond technology to include processes, a security-first culture, and continuous employee education.
- Robust data governance is crucial to manage and protect data, the primary target of attacks, by classifying its value and implementing security controls accordingly.
- A culture of continuous improvement is essential, involving regular simulations of real-world attacks (red-team/blue-team exercises) and maintaining a zero-trust mindset.
- Companies must fortify defenses against AI-powered social engineering by combining advanced technical filtering with employee training focused on skepticism and verification.
- Businesses should embrace proactive, AI-driven defense mechanisms like AI-powered threat hunting and adaptive honeypots to anticipate and neutralize threats before they escalate.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a critical topic for every business leader: cybersecurity in the age of artificial intelligence.
Host: We'll be discussing a fascinating study from the MIS Quarterly Executive, titled "Promises and Perils of Generative AI in Cybersecurity."
Host: It explores how GenAI has become a tool for both attackers and defenders, creating a significant dilemma for IT executives.
Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. The study summary mentions an 'AI arms race'. What is the core problem that business leaders are facing right now?
Expert: The problem is that the game has fundamentally changed. For years, cyberattacks were something IT teams reacted to. But Generative AI has supercharged the attackers.
Expert: Malicious actors are now using what the study calls 'black-hat GenAI' to create incredibly sophisticated, large-scale, and automated attacks that are faster and more convincing than anything we've seen before.
Expert: Think of phishing emails that perfectly mimic your CEO's writing style, or malware that can change its own code in real-time to avoid detection. This technology makes it easy for even non-technical criminals to launch devastating attacks.
Host: So, how did the researchers actually go about studying this fast-moving threat?
Expert: They used a very practical approach. The study presents a detailed case study of a fictional insurance company, "Surine," that suffers one of these advanced attacks.
Expert: But what's crucial is that this fictional story is based on real-life events and constructed from interviews with actual cybersecurity professionals and their clients. It’s not just theory; it’s a reflection of what’s happening in the real world.
Host: That's a powerful way to illustrate the risk. So, after analyzing this case, what were the main findings?
Expert: The first, and most important, is that GenAI is a double-edged sword. It’s an incredible weapon for attackers, but it's also an essential shield for defenders. This means companies can no longer afford to be reactive. They must be proactive.
Host: What does being proactive look like in this context?
Expert: It means adopting what the study calls a 'Defense in Depth' strategy. This isn't just about buying the latest security software. It’s a holistic approach that integrates technology, processes, and people.
Host: And that people element seems critical. The study mentions that GenAI is making social engineering, like phishing attacks, much more dangerous.
Expert: Absolutely. In the Surine case, the attackers used GenAI to craft a perfectly convincing email, supposedly from the CIO, complete with a deepfake video. It tricked employees into giving up their credentials.
Expert: This is why the study emphasizes the need for a security-first culture and continuous employee education. We need to train our teams to have a healthy skepticism.
Host: It sounds like fighting an AI-powered attacker requires an AI-powered defender.
Expert: Precisely. The other key finding is the need to embrace proactive, AI-driven defense. The company in the study fought back using AI-powered 'honeypots'.
Host: Honeypots? Can you explain what those are?
Expert: Think of them as smart traps. They are decoy systems designed to look like valuable targets. A defensive AI uses them to lure the attacking AI, study its methods, and learn how to defeat it—all without putting real company data at risk. It’s literally fighting fire with fire.
Host: This is all so fascinating. Alex, let’s bring it to our audience. What are the key takeaways for business leaders listening right now? Why does this matter to them?
Expert: First, recognize that cybersecurity is no longer just an IT problem; it’s a core business risk. It requires a company-wide culture of security, championed from the C-suite down.
Expert: Second, you must know what you're protecting. The study stresses the importance of robust data governance. Classify your data, understand its value, and focus your defenses on your most critical assets.
Expert: Third, you have to shift from a reactive to a proactive mindset. This means investing in continuous training, running real-world attack simulations, and adopting a 'zero-trust' culture where every access attempt is verified.
Expert: And finally, you have to leverage AI in your defense. In this new landscape, human teams alone can't keep up with the speed and scale of AI-driven attacks. You need AI to help anticipate and neutralize threats before they escalate.
Host: So the message is clear: the threat has evolved, and so must our defense. Generative AI is both a powerful weapon and an essential shield.
Host: Business leaders need a holistic, culture-first strategy and must be proactive, using AI to fight AI.
Host: Alex Ian Sutherland, thank you for sharing these invaluable insights with us today.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the intersection of business and technology.
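The adaptive honeypot idea discussed in this episode can be sketched in a few lines of code. This is not the system from the study; it is a minimal, hypothetical illustration in which every name, payload rule, and canned response is invented. The core pattern it shows is real, though: a decoy records an attacker's behavior instead of blocking it, and tailors its replies to keep the attacker engaged while defenders learn.

```python
from collections import Counter

class AdaptiveHoneypot:
    """Minimal decoy service: logs probes and adapts its responses.

    Purely illustrative -- production honeypots are far more elaborate.
    """

    def __init__(self):
        self.probe_log = []              # every interaction, kept for analysis
        self.technique_counts = Counter()

    def handle_probe(self, source_ip, payload):
        # Record the attacker's behavior instead of blocking it.
        technique = self._classify(payload)
        self.probe_log.append((source_ip, technique, payload))
        self.technique_counts[technique] += 1
        # Adapt: answer with a lure tailored to what the attacker tries.
        if technique == "credential_guess":
            return "530 Login incorrect"        # keep them guessing
        if technique == "sql_injection":
            return "500 Internal Server Error"  # look exploitable
        return "220 service ready"

    def _classify(self, payload):
        # Toy classifier; a defensive AI would sit here in practice.
        lowered = payload.lower()
        if "' or " in lowered or "union select" in lowered:
            return "sql_injection"
        if lowered.startswith(("user ", "pass ")):
            return "credential_guess"
        return "recon"

    def dominant_technique(self):
        # What defenders learn: the attacker's preferred method.
        return self.technique_counts.most_common(1)[0][0]


pot = AdaptiveHoneypot()
pot.handle_probe("203.0.113.7", "USER admin")
pot.handle_probe("203.0.113.7", "PASS hunter2")
pot.handle_probe("203.0.113.7", "id=1' OR '1'='1")
print(pot.dominant_technique())  # prints "credential_guess"
```

The design choice worth noting is that the honeypot never rejects traffic outright; its value lies entirely in the log and the learned attack profile.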
Generative AI, Cybersecurity, Black-hat AI, White-hat AI, Threat Hunting, Social Engineering, Defense in Depth
MIS Quarterly Executive (2025)
Successfully Mitigating AI Management Risks to Scale AI Globally
Thomas Hutzschenreuter, Tim Lämmermann, Alexander Sake, Helmuth Ludwig
This study presents an in-depth case study of the industrial AI pioneer Siemens AG to understand how companies can effectively scale artificial intelligence systems. It identifies five critical technology management risks associated with both generative and predictive AI and provides practical recommendations for mitigating them to create company-wide business impact.
Problem
Many companies struggle to effectively scale modern AI systems, with over 70% of implementation projects failing to create a measurable business impact. These failures stem from machine learning's unique characteristics, which amplify existing technology management challenges and introduce entirely new ones that firms are often unprepared to handle.
Outcome
- Missing or falsely evaluated potential AI use case opportunities.
- Algorithmic training and data quality issues.
- Task-specific system complexities.
- Mismanagement of system stakeholders.
- Threats from provider and system dependencies.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I'm your host, Anna Ivy Summers. Today, we're diving into one of the biggest challenges facing businesses: how to move artificial intelligence from a small-scale experiment to a global, value-creating engine.
Host: We're exploring a new study titled "Successfully Mitigating AI Management Risks to Scale AI Globally." It's an in-depth look at the industrial pioneer Siemens AG to understand how companies can effectively scale AI systems, identifying the critical risks and providing practical recommendations. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: It's great to be here, Anna.
Host: Alex, the study opens with a pretty stark statistic: over 70% of AI projects fail to create a measurable business impact. Why is it so difficult for companies to get this right?
Expert: It's a huge problem. The study points out that modern AI, which is based on machine learning, is fundamentally different from traditional software. It's not programmed with rigid rules; it learns from data in a probabilistic way. This amplifies old technology management challenges and creates entirely new ones that most firms are simply unprepared to handle.
Host: So to understand how to succeed, the researchers took a closer look at a company that is succeeding. What was their approach?
Expert: They conducted an in-depth case study of Siemens. Siemens is an ideal subject because they're a global industrial leader that has been working with AI for over 50 years—from early expert systems in the 70s to the predictive and generative AI we see today. This long journey provides a rich, real-world playbook of what works and what doesn't when you're trying to scale.
Host: By studying a success story, we can learn what to do right. So, what were the main risks the study uncovered?
Expert: The researchers identified five critical risk categories. The first is missing or falsely evaluating potential AI opportunities. The field moves so fast that it’s hard to even know what's possible, let alone which ideas will actually create value.
Host: Okay, so just finding the right project is the first hurdle. What's next?
Expert: The second risk is all about data. Specifically, algorithmic training and data quality issues. Every business leader has heard the phrase "garbage in, garbage out," and for AI, this is make-or-break. The study emphasizes that high-quality data is a strategic resource, but it's often siloed away in different departments, incomplete, or biased.
Host: That makes sense. What's the third risk?
Expert: Task-specific system complexities. AI doesn't operate in a vacuum. It has to be integrated into existing, often messy, technological landscapes—hardware, cloud servers, enterprise software. Even a small change in the real world, like new lighting in a factory, can degrade an AI's performance if it isn't retrained.
Host: So it’s about the tech integration. What about the human side?
Expert: That's exactly the fourth risk: mismanagement of system stakeholders. This is about people. To succeed, you need buy-in from everyone—engineers, sales teams, customers, and even regulators. If people don't trust the AI or see it as a threatening "black box," the project is doomed to fail, no matter how good the technology is.
Host: And the final risk?
Expert: The fifth risk is threats from provider and system dependencies. This is essentially getting locked into a single external vendor for a critical AI model or service. It limits your flexibility, can be incredibly costly, and puts you at the mercy of another company's roadmap.
Host: Those are five very real business risks. So, Alex, for our listeners—the business leaders and managers—what are the key takeaways? How can they actually mitigate these risks?
Expert: The study provides some excellent, practical recommendations. To avoid missing opportunities, they suggest a "hub-and-spoke" model. Have a central AI team, but also empower decentralized teams in different business units to scout for use cases that solve their specific problems.
Host: So, democratize the innovation process. What about the data problem?
Expert: You have to treat data as a strategic asset. The key is to implement company-wide data-sharing principles to break down those silos. Siemens is creating a centralized data warehouse so their experts can find and use the data they need. And critically, they focus on owning and protecting their most valuable data sources.
Host: And for managing the complexity of these systems?
Expert: The recommendation is to build for modularity. Siemens uses what they call a "model zoo"—a library of reusable AI components. This way, you can update or swap out parts of a system without having to rebuild it from scratch. It makes the whole architecture more agile and future-proof.
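The "model zoo" principle Alex describes can be sketched as a simple component registry. The class, method names, and toy models below are invented for illustration and are not Siemens' actual architecture; the point is only the modularity pattern, where callers depend on a task name rather than on a concrete model, so components can be swapped without rebuilding the surrounding system.

```python
class ModelZoo:
    """Minimal registry of interchangeable model components (illustrative)."""

    def __init__(self):
        self._models = {}  # task name -> (version, predict function)

    def register(self, task, version, predict_fn):
        # Registering the same task again replaces the component in place.
        self._models[task] = (version, predict_fn)

    def predict(self, task, x):
        _version, fn = self._models[task]
        return fn(x)

    def version(self, task):
        return self._models[task][0]


zoo = ModelZoo()
# v1: a crude rule-based stand-in for a defect-detection model.
zoo.register("defect_detection", "v1", lambda img: sum(img) > 10)
print(zoo.predict("defect_detection", [4, 4, 4]))   # prints True

# Swap in v2 without touching any caller code.
zoo.register("defect_detection", "v2", lambda img: max(img) > 5)
print(zoo.predict("defect_detection", [4, 4, 4]))   # prints False
print(zoo.version("defect_detection"))              # prints v2
```

The agility claim in the episode falls out of this structure: the caller's code is identical before and after the swap.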
Host: I like that idea of a 'model zoo'. Let's touch on the last two. How do you manage stakeholders and avoid being locked into a vendor?
Expert: For stakeholders, the advice is to integrate them into the development process step-by-step. Educate them through workshops and hands-on "playground" sessions to build trust. Siemens even cultivates internal "AI ambassadors" who champion the technology among their peers.
Expert: And to avoid dependency, the strategy is simple but powerful: dual-sourcing. For any critical AI project, partner with at least two comparable providers. This maintains competition, gives you leverage, and ensures you're never completely reliant on a single external company.
Host: Fantastic advice, Alex. So to summarize for our listeners: successfully scaling AI means systematically scouting for the right opportunities, treating your data as a core strategic asset, building for modularity and change, bringing your people along on the journey, and actively avoiding vendor lock-in.
Host: Alex Ian Sutherland, thank you so much for breaking down this crucial research for us.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights. Join us next time as we explore the future of work in the age of intelligent automation.
AI management, risk mitigation, scaling AI, generative AI, predictive AI, technology management, case study
MIS Quarterly Executive (2025)
Transforming Energy Management with an AI-Enabled Digital Twin
Hadi Ghanbari, Petter Nissinen
This paper reports on a case study of how one of Europe's largest district heating providers, called EnergyCo, implemented an AI-assisted digital twin to improve energy efficiency and sustainability. The study details the implementation process and its outcomes, providing six key recommendations for executives in other industries who are considering adopting digital twin technology.
Problem
Large-scale energy providers face significant challenges in managing complex district heating networks due to fluctuating energy prices, the shift to decentralized renewable energy sources, and operational inefficiencies from siloed departments. Traditional control systems lack the comprehensive, real-time view needed to optimize the entire network, leading to energy loss, higher costs, and difficulties in achieving sustainability goals.
Outcome
- The AI-enabled digital twin provided a comprehensive, real-time representation of the entire district heating network, replacing fragmented views from legacy systems.
- It enabled advanced simulation and optimization, allowing the company to improve operational efficiency, manage fluctuating energy prices, and move toward its carbon neutrality goals.
- The system facilitated scenario-based decision-making, helping operators forecast demand, optimize temperatures and pressures, and reduce heat loss.
- The digital twin enhanced cross-departmental collaboration by providing a shared, holistic view of the network's operations.
- It enabled a shift from reactive to proactive maintenance by using predictive insights to identify potential equipment failures before they occur, reducing costs and downtime.
Host: Welcome to A.I.S. Insights, the podcast powered by Living Knowledge, where we translate complex research into actionable business strategy. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating case study called "Transforming Energy Management with an AI-Enabled Digital Twin." It details how one of Europe's largest energy providers used this cutting-edge technology to completely overhaul its operations for better efficiency and sustainability. With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: So, Alex, let's start with the big picture. Why would a massive energy company need a technology like an AI-enabled digital twin? What problem were they trying to solve?
Expert: Well, a company like EnergyCo, as it's called in the study, manages an incredibly complex district heating network. We're talking about over 2,800 kilometers of pipes. Their traditional control systems just couldn't keep up.
Host: What was making it so difficult?
Expert: It was a perfect storm of challenges. First, you have volatile energy prices. Second, they're shifting from a few big fossil-fuel plants to many smaller, decentralized renewable sources, which are less predictable. And internally, their departments were siloed. The production team, the network team, and the customer team all had different data and different priorities, leading to significant energy loss and higher costs.
Host: It sounds like they were flying with a dozen different dashboards but no single view of the cockpit. So what was the approach they took? What exactly is a digital twin?
Expert: In simple terms, a digital twin is a dynamic, virtual replica of a physical system. The key thing that distinguishes it from a simple digital model is that the data flow is automatic and two-way. It doesn't just receive real-time data from the physical network; it can be used to simulate changes and even send instructions back to optimize it.
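The two-way data flow that distinguishes a digital twin from a static model can be sketched as a small loop. This toy class is not EnergyCo's system; the names, the linear heat-loss coefficient, and the setpoint rule are all invented stand-ins for real thermo-hydraulic simulation. What the sketch preserves is the direction of the arrows: sensor data flows in, a "what-if" runs in the virtual replica, and an optimized setpoint flows back out.

```python
class HeatingTwin:
    """Toy digital twin of one district heating segment (illustrative only)."""

    HEAT_LOSS_PER_DEGREE = 0.8  # invented coefficient standing in for physics

    def __init__(self):
        self.supply_temp_c = 90.0   # the twin's mirror of the physical state

    def ingest(self, sensor_temp_c):
        # Inbound flow: physical sensors update the virtual replica.
        self.supply_temp_c = sensor_temp_c

    def simulate_loss(self, candidate_temp_c):
        # "What-if": estimated network heat loss at a candidate setpoint.
        return candidate_temp_c * self.HEAT_LOSS_PER_DEGREE

    def recommend_setpoint(self, demand_temp_c, margin_c=5.0):
        # Outbound flow: the lowest supply temperature that still meets
        # forecast demand plus a safety margin, minimizing heat loss.
        return max(demand_temp_c + margin_c, 65.0)


twin = HeatingTwin()
twin.ingest(sensor_temp_c=92.3)                      # real network -> twin
setpoint = twin.recommend_setpoint(demand_temp_c=70.0)
saved = twin.simulate_loss(twin.supply_temp_c) - twin.simulate_loss(setpoint)
print(setpoint, round(saved, 2))                     # twin -> real network
```

A static digital model would stop after `ingest`; the twin closes the loop by sending `setpoint` back to the physical network.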
Host: So it’s a living model, not a static blueprint. How did the study find this approach worked in practice for EnergyCo? What were the key outcomes?
Expert: The results were transformative. The first major finding was that the digital twin provided a single, comprehensive, real-time representation of the entire network. For the first time, everyone was looking at the same holistic picture.
Host: And what did that unified view enable them to do?
Expert: It unlocked advanced simulation and optimization. Operators could now run "what-if" scenarios. For example, they could accurately forecast demand based on weather data and then simulate the most cost-effective way to generate and distribute heat, drastically reducing energy loss and managing those fluctuating fuel prices.
Host: The study also mentions collaboration. How did it help there?
Expert: By breaking down the data silos, it naturally improved cross-departmental collaboration. When the production team could see how their decisions impacted network pressure miles away, they could make smarter, more coordinated choices. It created a shared operational language.
Host: That makes sense. And I was particularly interested in the shift from reactive to proactive maintenance.
Expert: Absolutely. Instead of waiting for a critical failure, the AI within the twin could analyze data to predict which components were under stress or likely to fail. This allowed EnergyCo to schedule maintenance proactively, which is far cheaper and less disruptive than emergency repairs.
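The reactive-to-proactive shift can be illustrated with a minimal anomaly-scoring sketch. The study does not describe EnergyCo's actual model, so the component names, readings, and threshold below are invented; a simple z-score stands in for whatever predictive analytics the real twin uses.

```python
from statistics import mean, stdev

def flag_for_maintenance(history, latest, z_threshold=3.0):
    """Flag a component whose latest reading deviates sharply from its
    own history -- a crude stand-in for predictive-maintenance models."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False  # no variation on record, nothing to compare against
    z = abs(latest - mu) / sigma
    return z > z_threshold

# Hypothetical vibration readings (mm/s) for two pumps.
pump_a_history = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1]
pump_b_history = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1]

print(flag_for_maintenance(pump_a_history, latest=2.2))  # prints False: normal drift
print(flag_for_maintenance(pump_b_history, latest=3.5))  # prints True: schedule a check
```

The business logic matches the episode: the second pump has not failed yet, but its trend earns it a scheduled inspection instead of an emergency repair later.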
Host: Alex, this is clearly a game-changer for the energy sector. But what’s the key takeaway for our listeners—the business leaders in manufacturing, logistics, or even retail? Why does this matter to them?
Expert: The most crucial lesson is about global versus local optimization. So many businesses try to improve one department at a time, but that can create bottlenecks elsewhere. A digital twin gives you a holistic view of your entire value chain, allowing you to make decisions that are best for the whole system, not just one part of it.
Host: So it’s a tool for breaking down those internal silos we see everywhere.
Expert: Exactly. The second key takeaway is that the human element is vital. The study shows that EnergyCo didn't just deploy the tech and replace people. They positioned it as a tool to support their operators, building trust and involving them in the process. Automation was gradual, which is critical for buy-in.
Host: That’s a powerful point about managing technological change. Any final takeaway for our audience?
Expert: Yes, the study highlights how this technology can become a foundation for new business models. EnergyCo is now exploring how to use the digital twin to give customers real-time data, turning them from passive consumers into active participants in energy management. For any business, this shows that operational tools can unlock future strategic growth.
Host: So, to summarize: an AI-enabled digital twin offers a holistic, real-time view of your operations, it breaks down silos to enable smarter decisions, and it can even pave the way for future innovation. It's about augmenting your people, not just automating processes.
Host: Alex Ian Sutherland, thank you so much for these brilliant insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights, powered by Living Knowledge. Join us next time as we uncover more actionable intelligence from the world of research.
Digital Twin, Energy Management, District Heating, AI, Cyber-Physical Systems, Sustainability, Case Study
MIS Quarterly Executive (2024)
How a Utility Company Established a Corporate Data Culture for Data-Driven Decision Making
Philipp Staudt, Rainer Hoffmann
This paper presents a case study of a large German utility company's successful transition to a data-driven organization. It outlines the strategy, which involved three core transformations: enabling the workforce, improving the data lifecycle, and implementing employee-centered data management. The study provides actionable recommendations for industrial organizations facing similar challenges.
Problem
Many industrial companies, particularly in the utility sector, struggle to extract value from their data. The ongoing energy transition, with the rise of renewable energy sources and electric vehicles, has made traditional, heuristic-based decision-making obsolete, creating an urgent need for a robust corporate data culture to manage increasing complexity and ensure grid stability.
Outcome
- A data culture was successfully established through three intertwined transformations: enabling the workforce, improving the data lifecycle, and transitioning to employee-centered data management.
- Enabling the workforce involved upskilling programs ('Data and AI Multipliers'), creating platforms for knowledge sharing, and clear communication to ensure widespread buy-in and engagement.
- The data lifecycle was improved by establishing new data infrastructure for real-time data, creating a central data lake, and implementing a strong data governance framework with new roles like 'data officers' and 'data stewards'.
- An employee-centric approach, featuring cross-functional teams, showcasing quick wins to demonstrate value, and transparent communication, was crucial for overcoming resistance and building trust.
- The transformation resulted in the deployment of over 50 data-driven solutions that replaced outdated processes and improved decision-making in real-time operations, maintenance, and long-term planning.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge, the podcast where we turn academic research into actionable business intelligence. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating case study titled, "How a Utility Company Established a Corporate Data Culture for Data-Driven Decision Making."
Host: It explores how a large German utility company transformed itself into a data-driven organization. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Most companies know data is important, but this study focuses on a utility company. What was the specific problem they were trying to solve?
Expert: It’s a problem many traditional industries are facing, but it's especially acute in the energy sector. They’re dealing with a massive shift—the rise of renewable energy like wind and solar, and the explosion in electric vehicle charging.
Host: So the old ways of working just weren't cutting it anymore?
Expert: Exactly. For decades, they relied on experience and simple tools. The study gives a great example of a "drag pointer"—basically a needle on a gauge that only showed the highest energy load a substation ever experienced. It didn't tell you when it happened, or why.
Host: A single data point, with no context.
Expert: Precisely. And that was fine when the grid was predictable. But suddenly, they went from handling a dozen requests for new EV chargers a month to nearly three thousand. The old "rule-of-thumb" approach became obsolete and even risky for grid stability. They were flying blind.
Host: So how did the researchers get inside this transformation to understand how the company fixed this?
Expert: They conducted a deep-dive case study, interviewing seven of the company’s key domain experts. These were the people on the front lines—the ones directly involved in building the new data strategy. This gave them a real ground-truth perspective on what actually worked.
Host: So what were the key findings? What was the secret to their success?
Expert: The study breaks it down into three core transformations that were all linked together. The first, and perhaps most important, was enabling the workforce.
Host: This wasn't just about hiring a team of data scientists, then?
Expert: Not at all. They created a program to train existing employees to become "Data and AI Multipliers." These were people from various departments who became data champions, identifying opportunities and helping their colleagues use new tools. It was about upskilling from within.
Host: Building capability across the organization. What was the second transformation?
Expert: Improving the data lifecycle. This sounds technical, but it’s really about fixing the plumbing. They moved from scattered, siloed databases to a central data lake, creating a single source of truth that everyone could access.
Host: And I see they also created new roles like 'data officers' and 'data stewards'.
Expert: Yes, and this is crucial. It made data quality a formal part of people's jobs. Instead of data being an abstract IT issue, specific people became accountable for its accuracy and maintenance within their business units.
Host: That makes sense. But change is hard. How did they get everyone to embrace this new way of working?
Expert: That brings us to the third piece: an employee-centered approach. They knew they couldn't just mandate this from the top down. They formed cross-functional teams, bringing engineers and data specialists together to solve real problems.
Host: And they made a point of showcasing quick wins, right?
Expert: Absolutely. This was key to building momentum. For example, they automated a critical report that used to take two employees a full month to compile, three times a year. Suddenly, that data was available in real-time. When people see that kind of tangible benefit, it overcomes resistance and builds trust in the process.
Host: This is all fascinating for a utility company, but what's the key takeaway for a business leader in, say, manufacturing or retail? Why does this matter to them?
Expert: The lessons are completely universal. First, you can't just buy technology; you have to invest in your people. The "Data Multiplier" model of empowering internal champions can work in any industry.
Host: So, people first. What else?
Expert: Second, make data quality an explicit responsibility. Creating roles like data stewards ensures accountability and treats data as the critical business asset it is. It stops being everyone's problem and no one's priority.
Host: And the third lesson?
Expert: Start small and demonstrate value fast. Don't try to boil the ocean. Find a painful, manual process, fix it with a data-driven solution, and then celebrate that "quick win." That success story becomes your best marketing tool for driving wider adoption. Ultimately, this company deployed over 50 new data solutions that transformed their operations.
Host: A powerful example of real-world impact. So, to recap: the challenges of the energy transition forced this company to ditch its old methods. Their success came from a three-part strategy: empowering their workforce, rebuilding their data infrastructure, and using an employee-centric approach focused on quick wins.
Host: Alex, thank you so much for breaking that down for us. It’s a brilliant roadmap for any company looking to build a true data culture.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for joining us on A.I.S. Insights — powered by Living Knowledge. We’ll see you next time.
data culture, data-driven decision making, utility company, energy transition, change management, data governance, case study
MIS Quarterly Executive (2024)
Leveraging Information Systems for Environmental Sustainability and Business Value
Anne Ixmeier, Franziska Wagner, Johann Kranz
This study analyzes 31 articles from practitioner journals to understand how businesses can use Information Systems (IS) to enhance environmental sustainability. Based on a comprehensive literature review, the research provides five practical recommendations for managers to bridge the gap between sustainability goals and actual implementation, ultimately creating business value.
Problem
Many businesses face growing pressure to improve their environmental sustainability but struggle to translate sustainability initiatives into tangible business value. Managers are often unclear on how to effectively leverage information systems to achieve both environmental and financial goals, a challenge referred to as the 'sustainability implementation gap'.
Outcome
- Legitimize sustainability by using IS to create awareness and link environmental metrics to business value.
- Optimize processes, products, and services by using IS to reduce environmental impact and improve eco-efficiency.
- Internalize sustainability by integrating it into core business strategies and decision-making, informed by data from environmental management systems.
- Standardize sustainability data by establishing robust data governance to ensure information is accessible, comparable, and transparent across the value chain.
- Collaborate with external partners by using IS to build strategic partnerships and ecosystems that can collectively address complex sustainability challenges.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business, technology, and Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating study titled "Leveraging Information Systems for Environmental Sustainability and Business Value."
Host: It explores how companies can use their information systems, or IS, not just to meet sustainability goals, but to actually create tangible business value. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome to the show.
Expert: Thanks for having me, Anna. It's a critical topic.
Host: Absolutely. So, let's start with the big picture. What is the core problem this study is trying to solve for businesses?
Expert: The central issue is something the researchers call the 'sustainability implementation gap'.
Host: A gap? What does that mean?
Expert: It means that while businesses are under immense pressure from customers, investors, and regulators to be more environmentally friendly, many managers are struggling. They don't have the tools or a clear roadmap to turn those sustainability initiatives into real business value, like cost savings or new revenue.
Host: So they have the ambition, but not the execution plan.
Expert: Exactly. They know sustainability is important, but they can't connect the dots between, say, reducing carbon emissions and improving their bottom line. This study aims to provide that practical roadmap.
Host: So, how did the researchers go about creating this roadmap? What was their approach?
Expert: Instead of building a purely theoretical model, they did something very practical. They conducted a comprehensive review of 31 articles from leading practitioner journals—publications that report on real-world business challenges and solutions.
Host: So they looked at what's actually working in the field.
Expert: Precisely. They analyzed a decade's worth of case studies and reports to find common patterns and best practices, specifically focusing on how information systems are being used successfully.
Host: That sounds incredibly useful. Let's get to the findings. What were the key recommendations that came from this analysis?
Expert: The study outlines a five-step pathway. The steps are: Legitimize, Optimize, Internalize, Standardize, and Collaborate. Together, they create a cycle for turning sustainability into value.
Host: Okay, let's break that down. What does it mean to 'Legitimize' sustainability?
Expert: It means making sustainability a real business priority, not just a PR exercise. Information systems are key here. They allow you to use analytical tools to connect environmental metrics, like energy consumption, directly to financial performance indicators. When you can show that reducing energy use saves a specific amount of money, sustainability becomes legitimized in the language of business.
Host: You make a clear business case for it. Once that's done, what's the next step, 'Optimize'?
Expert: Optimization is about using IS to improve the eco-efficiency of your processes, products, and services. A great example from the study is a consortium that piloted digital watermarks on packaging. These invisible codes help waste sorting facilities to recycle materials far more accurately, reducing waste and creating value from it.
Host: That’s a brilliant, tangible example. So after legitimizing and optimizing, the next step is to 'Internalize'. How is that different?
Expert: Internalizing means weaving sustainability into the very fabric of your corporate strategy. It's about using data from your environmental management systems to inform core business decisions, from project planning to investments. The study highlights how the chemical company BASF uses its management system to ensure environmental factors are a binding part of central strategic decisions.
Host: It becomes part of the company's DNA. This brings us to the last two steps, which sound very connected: 'Standardize' and 'Collaborate'.
Expert: They are absolutely connected. To collaborate effectively, you first need to standardize. This means establishing robust data governance so that sustainability information is consistent, comparable, and transparent. You can't work with your suppliers on reducing emissions if you're all measuring things differently.
Host: A common language for data.
Expert: Exactly. And once you have that, you can 'Collaborate'. No single company can solve major environmental challenges alone. IS allows you to build strategic partnerships and ecosystems. For instance, the study mentions a platform using blockchain to allow partners in a supply chain to securely share sustainability data without revealing sensitive trade secrets. This builds trust and enables collective action.
Host: Alex, this is a very clear and powerful framework. If you had to distill this for a CEO or a manager listening right now, what is the single most important business takeaway?
Expert: The key takeaway is to stop viewing sustainability as a cost or a compliance burden. Information systems provide the tools to reframe it as a driver of innovation and competitive advantage. By following this pathway, you can use data to uncover efficiencies, create more innovative and circular products, reduce risk in your supply chain, and ultimately build a more resilient and profitable business. It’s an iterative journey, not a one-time fix.
Host: A journey from obligation to opportunity.
Expert: That's the perfect way to put it.
Host: To summarize for our listeners: businesses are struggling with a 'sustainability implementation gap'. This study provides a practical five-step pathway—Legitimize, Optimize, Internalize, Standardize, and Collaborate—showing how information systems can turn sustainability from an obligation into a core driver of business value.
Host: Alex Ian Sutherland, thank you so much for translating this crucial research into such clear, actionable insights. Expert: My pleasure, Anna. Host: And thanks to all of you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping our world.
Information Systems, Environmental Sustainability, Green IS, Business Value, Corporate Strategy, Sustainability Implementation
MIS Quarterly Executive (2024)
How Audi Scales Artificial Intelligence in Manufacturing
André Sagodi, Benjamin van Giffen, Johannes Schniertshauer, Klemens Niehues, Jan vom Brocke
This paper presents a case study on how the automotive manufacturer Audi successfully scaled an artificial intelligence (AI) solution for quality inspection in its manufacturing press shops. It analyzes Audi's four-year journey, from initial exploration to multi-site deployment, to identify key strategies and challenges. The study provides actionable recommendations for senior leaders aiming to capture business value by scaling AI innovations.
Problem
Many organizations struggle to move their AI initiatives from the pilot phase to full-scale operational use, failing to realize the technology's full economic potential. This is a particular challenge in manufacturing, where integrating AI with legacy systems and processes presents significant barriers. This study addresses how a company can overcome these challenges to successfully scale an AI solution and unlock long-term business value.
Outcome
- Audi successfully scaled an AI-based system to automate the detection of cracks in sheet metal parts, a crucial quality control step in its press shops. - The success was driven by a strategic four-stage approach: Exploring, Developing, Implementing, and Scaling, with a focus on designing for scalability from the outset. - Key success factors included creating a single, universal AI model for multiple deployments, leveraging data from various sources to improve the model, and integrating the solution into the broader Volkswagen Group's digital production platform to create synergies. - The study highlights the importance of decoupling value from cost, which Audi achieved by automating monitoring and deployment pipelines, thereby scaling operations without proportionally increasing expenses. - Recommendations for other businesses include making AI scaling a strategic priority, fostering collaboration between AI experts and domain specialists, and streamlining operations through automation and robust governance.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a challenge that trips up so many companies: taking artificial intelligence from a cool experiment to a large-scale business solution. Host: We're looking at a fascinating new study from MIS Quarterly Executive titled, "How Audi Scales Artificial Intelligence in Manufacturing." It's a deep dive into the carmaker's four-year journey to deploy an AI solution across multiple sites, offering some brilliant, actionable advice for senior leaders. Host: And to guide us through it, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex. Expert: Great to be here, Anna. Host: Alex, let's start with the big picture. The study summary mentions that many organizations struggle to get their AI projects out of the pilot phase. Can you paint a picture of this problem for us? Expert: Absolutely. It's often called "pilot purgatory." Companies build a successful AI proof-of-concept, but it never translates into real, widespread operational use. The study highlights that in 2019, only about 10% of automotive companies had implemented AI at scale. The gap between a pilot and an enterprise-grade system is massive. Host: And what was the specific problem Audi was trying to solve? Expert: They were focused on quality control in their press shops, where they stamp sheet metal into car parts like doors and hoods. A single press shop can produce over 3 million parts a year, and tiny, hard-to-see cracks can form in about one in every thousand parts. Finding these manually is slow and difficult, but missing them causes huge costs down the line. Host: So a perfect, high-stakes problem for AI to tackle. How did the researchers go about studying Audi's approach? Expert: They conducted an in-depth case study, tracking Audi's entire journey over four years. 
They analyzed how the company moved through four distinct stages: Exploring the initial idea, Developing the technology, Implementing it at the first site, and finally, Scaling it across the wider organization. Host: So what were the key findings? How did Audi escape that "pilot purgatory" you mentioned? Expert: There were a few critical factors. First, they designed for scale from the very beginning. It wasn't just about solving the problem for one press line; the goal was always a solution that could be rolled out to multiple factories. Host: That foresight seems crucial. What else? Expert: Second, and this is a key technical insight, they decided to build a single, universal AI model. Instead of creating a separate model for each press line or each car part, they built one core model and fed it image data from every deployment. This created a powerful network effect—the more data the model saw, the more accurate it became for everyone. Host: So the system gets smarter and more valuable as it scales. That's brilliant. Expert: Exactly. And third, they didn't build this in a vacuum. They integrated the AI solution into the larger Volkswagen Group's Digital Production Platform. This meant they could leverage existing infrastructure and align with the parent company's broader digital strategy, creating huge synergies. Host: It sounds like this was about much more than just a clever algorithm. So, Alex, this is the most important question for our listeners: Why does this matter for my business, even if I'm not in manufacturing? Expert: The lessons here are universal. The study boils them down into three key recommendations. First, make AI scaling a strategic priority. Don’t just fund isolated experiments. Focus on big, scalable business problems where AI can deliver substantial, long-term value. Host: Okay, be strategic. What's the second takeaway? Expert: Foster deep collaboration. This wasn’t just an IT project. 
Audi succeeded because their AI engineers worked hand-in-hand with the press shop experts on the factory floor. As one project leader put it, you have to involve the domain experts from day one to understand their pain points and create a shared sense of ownership. Host: So it's about people, not just technology. And the final lesson? Expert: Streamline operations through automation. Audi’s biggest win was what the study calls "decoupling value from cost." As they rolled the solution out to more sites, the value grew exponentially, but the costs stayed flat. They achieved this by automating the deployment and monitoring pipelines, so they didn't need to hire more engineers for each new factory. Host: That is the holy grail of scaling any technology. Alex, this has been incredibly insightful. Let's do a quick recap. Host: Many businesses get stuck in AI pilot mode. The case of Audi shows a way forward by following a strategic, four-stage approach. The key lessons for any business are to make scaling AI a core strategic goal, build cross-functional teams that pair tech experts with business experts, and automate your operations to ensure that value grows much faster than costs. Host: Alex Ian Sutherland, thank you so much for breaking that down for us. Expert: My pleasure, Anna. Host: And thank you to our audience for tuning into A.I.S. Insights — powered by Living Knowledge. We’ll see you next time.
Artificial Intelligence, AI Scaling, Manufacturing, Automotive Industry, Case Study, Digital Transformation, Quality Inspection
MIS Quarterly Executive (2024)
The Promise and Perils of Low-Code AI Platforms
Maria Kandaurova, Daniel A. Skog, Petra M. Bosch-Sijtsema
This study investigates the adoption of a low-code conversational Artificial Intelligence (AI) platform within four multinational corporations. Through a case study approach, the research identifies significant challenges that arise from fundamental, yet incorrect, assumptions about low-code technologies. The paper offers recommendations for companies to better navigate the implementation process and unlock the full potential of these platforms.
Problem
As businesses increasingly turn to AI for process automation, they often encounter significant hurdles during adoption. Low-code AI platforms are marketed as a solution to simplify this process, but there is limited research on their real-world application. This study addresses the gap by showing how companies' false assumptions about the ease of use, adaptability, and integration of these platforms can limit their effectiveness and return on investment.
Outcome
- The usability of low-code AI platforms is often overestimated; non-technical employees typically face a much steeper learning curve than anticipated and still require a foundational level of coding and AI knowledge. - Adapting low-code AI applications to specific, complex business contexts is challenging and time-consuming, contrary to the assumption of easy tailoring. It often requires significant investment in standardizing existing business processes first. - Integrating low-code platforms with existing legacy systems and databases is not a simple 'plug-and-play' process. Companies face significant challenges due to incompatible data formats, varied interfaces, and a lack of a comprehensive data strategy. - Successful implementation requires cross-functional collaboration between IT and business teams, thorough platform testing before procurement, and a strategic approach to reengineering business processes to align with AI capabilities.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we're diving into a very timely topic for any business looking to innovate: the real-world challenges of adopting new technology. We’ll be discussing a fascinating study titled "The Promise and Perils of Low-Code AI Platforms." Host: This study looks at how four major corporations adopted a low-code conversational AI platform, and it uncovers some crucial, and often incorrect, assumptions that businesses make about these powerful tools. Here to break it down for us is our analyst, Alex Ian Sutherland. Alex, welcome. Expert: Great to be here, Anna. Host: Alex, let's start with the big picture. Businesses are constantly hearing about AI and automation. What’s the core problem that these low-code AI platforms are supposed to solve? Expert: The problem is a classic one: a gap between ambition and resources. Companies want to automate processes, build chatbots, and leverage AI, but they often lack large teams of specialized AI developers. Low-code platforms are marketed as the perfect solution. Host: The 'democratization' of AI we hear so much about. Expert: Exactly. The promise is that you can use a simple, visual, drag-and-drop interface to build complex AI applications, empowering your existing business-focused employees to innovate without needing to write a single line of code. But as the study found, that promise often doesn't match the reality. Host: So how did the researchers investigate this gap between promise and reality? Expert: They took a very practical approach. They didn't just survey people; they conducted an in-depth case study. They followed the journey of four large multinational companies—in the energy, automotive, and retail sectors—as they all tried to implement the very same low-code conversational AI platform. Host: That’s great. 
So by studying the same platform across different industries, they could really pinpoint the common challenges. What were the main findings? Expert: The findings centered on three major false assumptions businesses made. The first was about usability. The assumption was that ‘low-code’ meant anyone could do it. Host: And that wasn't the case? Expert: Not at all. While the IT staff found it user-friendly, the business-side employees—the ones who were supposed to be empowered—faced a much steeper learning curve than anyone anticipated. One domain expert in the study described the experience as being "like Greek," saying it was far more complex than just "dragging and dropping." Host: So you still need a foundational level of technical knowledge. What was the second false assumption? Expert: It was about adaptability. The idea was that you could easily tailor these platforms to any specific business need. But creating applications to handle complex, real-world customer queries proved incredibly challenging and time-consuming. Host: Why was that? Expert: Because real business processes are often messy and rely on human intuition. The study found that before companies could automate a process, they first had to invest heavily in understanding and standardizing it. You can't teach an AI a process that isn't clearly defined. Host: That makes sense. You have to clean your house before you can automate the cleaning. What was the final key finding? Expert: This one is huge for any CIO: integration. The belief was that these platforms would be a simple 'plug-and-play' solution that could easily connect to existing company databases and systems. Host: I have a feeling it wasn't that simple. Expert: Far from it. The companies ran into major roadblocks trying to connect the platform to their legacy systems. They faced incompatible data formats and a lack of a unified data strategy. 
The study showed that you often need someone with knowledge of coding and APIs to build the bridges between the new platform and the old systems. Host: So, Alex, this is the crucial part for our listeners. If a business leader is considering a low-code AI tool, what are the key takeaways? What should they do differently? Expert: The study provides a clear roadmap. First, thoroughly test the platform before you buy it. Don't just watch the vendor's demo. Have your actual employees—the business users—try to build a real-world application with it. This will reveal the true learning curve. Host: A 'try before you buy' approach. What else? Expert: Second, success requires cross-functional collaboration. It’s not an IT project or a business project; it's both. The study highlighted that the most successful implementations happened when IT experts and business domain experts worked together in blended teams from day one. Host: So break down those internal silos. Expert: Absolutely. And finally, be prepared to change your processes, not just your tools. You can't just layer AI on top of existing workflows. You need to re-evaluate and often redesign your processes to align with the capabilities of the AI. It's as much about business process re-engineering as it is about technology. Host: This is incredibly insightful. It seems low-code AI platforms are powerful, but they are certainly not a magic bullet. Host: To sum it up: the promise of simplicity with these platforms often hides significant challenges in usability, adaptation, and integration. Success depends less on the drag-and-drop interface and more on a strategic approach that involves rigorous testing, deep collaboration between teams, and a willingness to rethink your fundamental business processes. Host: Alex, thank you so much for shedding light on the perils, and the real promise, of these platforms. Expert: My pleasure, Anna. Host: And a big thank you to our audience for tuning into A.I.S. Insights. 
We’ll see you next time.
Low-Code AI Platforms, Artificial Intelligence, Conversational AI, Implementation Challenges, Digital Transformation, Business Process Automation, Case Study
MIS Quarterly Executive (2024)
Combining Low-Code/No-Code with Noncompliant Workarounds to Overcome a Corporate System's Limitations
Robert M. Davison, Louie H. M. Wong, Steven Alter
This study explores how employees at a warehouse in Hong Kong utilize low-code/no-code principles with everyday tools like Microsoft Excel to create unofficial solutions. It examines these noncompliant but essential workarounds that compensate for the shortcomings of their mandated corporate software system. The research is based on a qualitative case study involving interviews with warehouse staff.
Problem
A global company implemented a standardized, non-customizable corporate system (Microsoft Dynamics) that was ill-suited for the unique logistical needs of its Hong Kong operations. This created significant operational gaps, particularly in delivery scheduling, leaving employees unable to perform critical tasks using the official software.
Outcome
- Employees effectively use Microsoft Excel as a low-code tool to create essential, noncompliant workarounds that are vital for daily operations, such as delivery management. - These employee-driven solutions, developed without formal low-code platforms or IT approval, become institutionalized and crucial for business success, highlighting the value of 'shadow IT'. - The study argues that low-code/no-code development is not limited to formal platforms and that managers should recognize, support, and govern these informal solutions. - Businesses are advised to adopt a portfolio approach to low-code development, leveraging tools like Excel alongside formal platforms, to empower employees and solve real-world operational problems.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we're diving into a fascinating study titled "Combining Low-Code/No-Code with Noncompliant Workarounds to Overcome a Corporate System's Limitations." Host: It explores how employees at a warehouse in Hong Kong used everyday tools, like Microsoft Excel, to create unofficial but essential solutions when their official corporate software fell short. Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome back. Expert: Great to be here, Anna. Host: So, Alex, let's start with the big picture. What was the real-world problem this study looked into? Expert: It’s a classic story of a global headquarters rolling out a one-size-fits-all solution. The company, called CoreRidge in the study, implemented a standardized corporate software, Microsoft Dynamics. Expert: The problem was, this system was completely non-customizable. It worked fine in most places, but it was a disaster for their Hong Kong operations. Host: A disaster how? What was so unique about Hong Kong? Expert: In Hong Kong, due to the high cost of real estate, the company has small retail stores and one large, central warehouse. The corporate software was designed for locations where the warehouse and store are together. Expert: It simply couldn't handle the complex delivery scheduling needed to get products from that single warehouse to all the different stores and customers. Core tasks were impossible to perform with the official system. Host: So employees were stuck. How did the researchers figure out what was happening? Expert: They went right to the source. It was a qualitative case study where they conducted in-depth interviews with 31 employees at the warehouse, from trainees all the way up to senior management. This gave them a ground-level view of how the team was actually getting work done. 
Host: And that brings us to the findings. What did they discover? Expert: They found that employees had essentially turned Microsoft Excel into their own low-code development tool. They were downloading data from the official system and using Excel to manage everything from delivery lists to rescheduling shipments during a typhoon. Host: So they built their own system, in a way. Expert: Exactly. And this wasn't a secret, rogue operation. These Excel workarounds became standard operating procedure. They were noncompliant with corporate IT policy, but they were absolutely vital for daily operations and customer satisfaction. The study calls this 'shadow IT', but frames it as a valuable, employee-driven innovation. Host: That’s a really interesting perspective. It sounds like the company should be celebrating these employees, not punishing them. Expert: That’s the core argument. The study suggests that this kind of informal, tool-based problem-solving is a legitimate form of low-code development. It’s not always about using a fancy, dedicated platform. Sometimes the best tool is the one your team already knows how to use. Host: This is the crucial part for our listeners. What are the key business takeaways here? Why does this matter? Expert: It matters immensely. First, it shows that managers need to recognize and support these informal solutions, not just shut them down. These workarounds are a goldmine of information about what's not working in your official systems. Host: So, don't fight 'shadow IT', but try to understand it? Expert: Precisely. The second major takeaway is that businesses should adopt a "portfolio approach" to low-code development. Don't just invest in one big platform. Empower your employees by recognizing the value of flexible, everyday tools like Excel. Expert: It’s about creating a governance structure that can embrace these informal solutions, manage their risks, and learn from them to make the whole organization smarter and more agile. 
Host: It sounds like a shift from rigid, top-down control to a more flexible, collaborative approach to technology. Expert: That's it exactly. It's about trusting your employees on the front lines to solve the problems they face every day, with the tools they have at hand. Host: So, to summarize: a rigid corporate system can fail to meet local needs, but resourceful employees can bridge the gap using everyday tools like Excel. And the big lesson for businesses is to recognize, govern, and learn from these informal innovations rather than just trying to eliminate them. Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us. Expert: My pleasure, Anna. Host: And a big thank you to our audience for tuning in to A.I.S. Insights. Join us next time as we continue to explore the ideas shaping our world, powered by Living Knowledge.
Low-Code/No-Code, Workarounds, Shadow IT, Citizen Development, Enterprise Systems, Case Study, Microsoft Excel
MIS Quarterly Executive (2024)
How GuideCom Used the Cognigy.AI Low-Code Platform to Develop an AI-Based Smart Assistant
Imke Grashoff, Jan Recker
This case study investigates how GuideCom, a medium-sized German software provider, utilized the Cognigy.AI low-code platform to create an AI-based smart assistant. The research follows the company's entire development process to identify the key ways in which low-code platforms enable and constrain AI development. The study illustrates the strategic trade-offs companies face when adopting this approach.
Problem
Small and medium-sized enterprises (SMEs) often lack the extensive resources and specialized expertise required for in-house AI development, while off-the-shelf solutions can be too rigid. Low-code platforms are presented as a solution to democratize AI, but there is a lack of understanding regarding their real-world impact. This study addresses the gap by examining the practical enablers and constraints that firms encounter when using these platforms for AI product development.
Outcome
- Low-code platforms enable AI development by reducing complexity through visual interfaces, facilitating cross-functional collaboration between IT and business experts, and preserving resources. - Key constraints of using low-code AI platforms include challenges with architectural integration into existing systems, ensuring the product is expandable for different clients and use cases, and managing security and data privacy concerns. - Contrary to the 'no-code' implication, existing software development skills are still critical for customizing solutions, re-engineering code, and overcoming platform limitations, especially during testing and implementation. - Establishing a strong knowledge network with the platform provider (for technical support) and innovation partners like clients (for domain expertise and data) is a crucial factor for success. - The decision to use a low-code platform is a strategic trade-off; it significantly lowers the barrier to entry for AI innovation but requires careful management of platform dependencies and inherent constraints.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating case study called "How GuideCom Used the Cognigy.AI Low-Code Platform to Develop an AI-Based Smart Assistant".
Host: It explores how a medium-sized company built its first AI product using a low-code platform, and what that journey reveals about the strategic trade-offs of this popular approach.
Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna.
Host: Alex, let's start with the big picture. What's the real-world problem this study is tackling?
Expert: The problem is something many businesses, especially small and medium-sized enterprises or SMEs, are facing. They know they need to adopt AI to stay competitive, but they often lack the massive budgets or specialized teams of data scientists and AI engineers to build solutions from scratch.
Host: And I imagine off-the-shelf products can be too restrictive?
Expert: Exactly. They’re often not a perfect fit. Low-code platforms promise a middle ground—a way to "democratize" AI development. But there's been a gap in understanding what really happens when a company takes this path. This study fills that gap.
Host: So how did the researchers approach this? What did they do?
Expert: They conducted an in-depth case study. They followed a German software provider, GuideCom, for over 16 months as they developed their first AI product—a smart assistant for HR services—using a low-code platform called Cognigy.AI.
Host: It sounds like they had a front-row seat to the entire process. So, what were the key findings? Did the low-code platform live up to the hype?
Expert: It was a story of enablers and constraints. On the positive side, the platform absolutely enabled AI development. Its visual, drag-and-drop interface dramatically reduced complexity.
Host: How did that help in practice?
Expert: It was crucial for fostering collaboration. Suddenly, the business experts from the HR department could work directly with the IT developers. They could see the logic, understand the process, and contribute meaningfully, which is often a huge challenge in tech projects. It also saved a significant amount of resources.
Host: That sounds fantastic. But you also mentioned constraints. What were the challenges?
Expert: The constraints were very real. The first was architectural integration. Getting the AI tool, built on an external platform, to work smoothly with GuideCom’s existing software suite was a major hurdle.
Host: And what else?
Expert: Security and expandability. They needed to ensure the client’s data was secure, and they wanted the product to be scalable for many different clients, each with unique needs. The platform had limitations that made this complex.
Host: So 'low-code' doesn't mean 'no-skills needed'?
Expert: That's perhaps the most critical finding. GuideCom's existing software development skills were absolutely essential. They had to write custom code and re-engineer parts of the solution to overcome the platform's limitations and meet their security and integration needs. The promise of 'no-code' wasn't the reality.
Host: This brings us to the most important question for our listeners: why does this matter for business? What are the practical takeaways?
Expert: The biggest takeaway is that adopting a low-code AI platform is a strategic trade-off, not a magic bullet. It brilliantly lowers the barrier to entry, allowing companies to start innovating with AI without a massive upfront investment. That’s a game-changer.
Host: But there's a 'but'.
Expert: Yes. But you must manage the trade-offs. Firstly, you become dependent on the platform provider, so you need to choose your partner carefully. Secondly, you cannot neglect in-house technical skills. You still need people who can code to handle customization and integration.
Host: The study also mentioned the importance of partnerships, didn't it?
Expert: It was a crucial factor for success. GuideCom built a strong knowledge network. They had a close relationship with the platform provider, Cognigy, for technical support, and they partnered with a major bank as their first client. This client provided invaluable domain expertise and real-world data to train the AI.
Host: A powerful combination of technical and business partners.
Expert: Precisely. You need both to succeed.
Host: This has been incredibly insightful. So to summarize for our listeners: Low-code platforms can be a powerful gateway for companies to start building AI solutions, as they reduce complexity and foster collaboration.
Host: However, it's a strategic trade-off. Businesses must be prepared for challenges with integration and security, retain in-house software skills for customization, and build a strong network with both the platform provider and innovation partners.
Host: Alex, thank you so much for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the future of business and technology.
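The trade-off described here (visual flows for speed, hand-written code for what the platform cannot cover) can be sketched in miniature. This is not the real Cognigy.AI API; the node format, the flow, and the `mask_employee_id` redaction helper are all hypothetical, assumed only for illustration:

```python
# Illustrative sketch, NOT the actual Cognigy.AI API: a low-code flow
# modeled as declarative nodes, with one "custom code" node where
# in-house developers handle needs the platform does not cover out of
# the box (here, a hypothetical redaction step for HR identifiers).
import re

def mask_employee_id(text: str) -> str:
    """Custom code node: redact employee IDs before storage or handover."""
    return re.sub(r"\bEMP-\d{4,}\b", "[REDACTED]", text)

# What a visual drag-and-drop editor might serialize a flow to.
FLOW = [
    {"type": "say", "text": "How many days of leave would you like to request?"},
    {"type": "custom_code", "handler": mask_employee_id},  # escape hatch into real code
    {"type": "say", "text": "I have forwarded your request to HR."},
]

def run_flow(user_text: str) -> list[str]:
    """Minimal interpreter for the declarative nodes above."""
    replies, sanitized = [], user_text
    for node in FLOW:
        if node["type"] == "say":
            replies.append(node["text"])
        elif node["type"] == "custom_code":
            sanitized = node["handler"](sanitized)
    replies.append(f"(stored transcript: {sanitized})")
    return replies

replies = run_flow("I am EMP-12345 and need leave next week.")
```

The point of the sketch is the `custom_code` node: the declarative parts are fast to assemble and easy for business experts to read, but the security and integration work still lives in ordinary code that someone in-house must write and maintain.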
low-code development, AI development, smart assistant, conversational AI, case study, digital transformation, SME
MIS Quarterly (2025)
EMERGENCE OF IT IMPLEMENTATION CONSEQUENCES IN ORGANIZATIONS: AN ASSEMBLAGE APPROACH
Abdul Sesay, Elena Karahanna, and Marie-Claude Boudreau
This study investigates how the effects of new technology, specifically body-worn cameras (BWCs), unfold within organizations over time. Using a multi-site case study of three U.S. police departments, the research develops a process model to explain how the consequences of IT implementation emerge. The study identifies three key phases in this process: individuation (selecting the technology and related policies), composition (combining the technology with users), and actualization (using the technology in real-world interactions).
Problem
When organizations implement new technology, the results are often unpredictable, with outcomes varying widely between different settings. Existing research has not fully explained why a technology can be successful in one organization but fail in another. This study addresses the gap in understanding how the consequences of a new technology, like police body-worn cameras, actually develop and evolve into established organizational practices.
Outcome
- The process through which technology creates new behaviors and practices is complex and non-linear, occurring in three distinct phases (individuation, composition, and actualization).
- Successful implementation is not guaranteed; it depends on the careful alignment of the technology itself (material components) with policies, training, and user adoption (expressive components) at each stage.
- The study found that of the three police departments, only one successfully implemented body cameras because it carefully selected high-quality equipment, developed specific policies for its use, and ensured officers were trained and held accountable.
- The other two departments experienced failure or delays due to poor quality equipment, generic policies, and inconsistent use, which prevented new, positive practices from taking hold.
- The model shows that outcomes emerge over time and may require continuous adjustments, demonstrating that success is an ongoing process, not a one-time event.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating question that plagues nearly every organization: why do some technology projects succeed while others fail? With me is our expert analyst, Alex Ian Sutherland, who has been looking into a study on this very topic.
Host: Alex, welcome to the show.
Expert: Great to be here, Anna.
Host: The study we're discussing is titled, "EMERGENCE OF IT IMPLEMENTATION CONSEQUENCES IN ORGANIZATIONS: AN ASSEMBLAGE APPROACH." Can you start by telling us what it's all about?
Expert: Absolutely. In simple terms, this study investigates how the real-world effects of a new technology unfold over time. It uses the rollout of body-worn cameras in three different U.S. police departments to create a model that explains how you get from just buying a new gadget to it actually changing how people work.
Host: And this is a huge issue for businesses. You invest millions in a new system, and the results can be completely unpredictable.
Expert: That's the core problem the study addresses. Why can the exact same technology be a game-changer in one organization but a total flop in the one next door? Existing theories haven’t fully explained this variation. The researchers wanted to understand the step-by-step process of how the consequences of new tech, whether good or bad, actually emerge.
Host: So how did they go about studying this? What was their approach?
Expert: They conducted a multi-site case study, deeply embedding themselves in three different police departments—a large urban one, a mid-sized suburban one, and a small-town one. Instead of just looking at the technology itself, they looked at how it was combined with policies, training, and the officers who had to use it every day.
Host: It sounds like they were looking at the entire ecosystem, not just the device. So, what were the key findings?
Expert: The study found that the process happens in three distinct phases. The first is what they call ‘individuation’. This is the selection phase—choosing the right cameras and, just as importantly, writing the specific policies for how they should be used.
Host: Okay, so the planning and purchasing stage. What's next?
Expert: Next is ‘composition’. This is where the tech meets the user. It's about physically combining the camera with the officer, providing training, and making sure the two can function together seamlessly. It’s about building a new combined unit: the officer-with-a-camera.
Host: And the final phase?
Expert: That’s ‘actualization’. This is when the technology is used in real-world situations, during interactions with the public. This is where new behaviors, like improved communication or more consistent evidence gathering, either become routine and successful, or the whole thing falls apart.
Host: And did they see different outcomes across the three police departments?
Expert: Dramatically different. Only one department truly succeeded. They carefully selected high-quality equipment after a pilot program, developed very specific policies with stakeholder input, and had strict training and accountability. The other two departments failed or faced major delays.
Host: Why did they fail?
Expert: For predictable reasons, in hindsight. One used subpar, unreliable cameras that often malfunctioned. Both used generic policies that weren't tailored to body cameras at all. In one case, the policy didn't even mention body cameras. This misalignment between the technology and the rules meant that positive new practices never took hold.
Host: This is the crucial part, Alex. What does a study about police body cameras mean for a business leader rolling out a new CRM, an AI tool, or any other major tech platform?
Expert: It means everything. The first big takeaway is that successful implementation is a process, not a purchase. You can't just buy the "best" software and expect magic. You have to manage each phase.
Host: And what about that link between the tech and the policies?
Expert: That’s the second key takeaway. You must align what the study calls the ‘material components’—the tech itself—with the ‘expressive components,’ which are your policies, training, and culture. A new sales tool is useless if the sales team isn't trained on it or if compensation plans don't encourage its use. The technology and the human systems must be designed together.
Host: So it's a continuous process of alignment.
Expert: Exactly, which leads to the third point: success is not a one-time event. The study's model shows that outcomes emerge over time and often require tweaks and course correction. The departments that failed couldn't adapt to the problems of poor equipment or bad policy. A successful business needs to build in feedback loops to learn and adjust as they go.
Host: So to summarize: implementing new technology isn't about the tech alone. It's a complex, multi-phase process that requires a deep alignment between the tools you choose and the rules, training, and people who use them. And you have to be ready to adapt along the way.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights. Join us next time as we continue to explore the ideas shaping our world.
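As a rough illustration, the three phases the study names can be walked through as a checklist in code. The phase names (individuation, composition, actualization) come from the study; the fields, the alignment checks, and the example data are invented for this sketch:

```python
# Rough illustration of the study's three-phase process model. Phase
# names come from the study; everything else here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Rollout:
    equipment_quality: str      # material component (the tech itself)
    policy_specific: bool       # expressive component: policy tailored to the tech?
    users_trained: bool         # expressive component: training and accountability
    log: list = field(default_factory=list)

    def individuation(self) -> bool:
        """Phase 1: selecting the technology and its governing policies."""
        ok = self.equipment_quality == "high" and self.policy_specific
        self.log.append(("individuation", ok))
        return ok

    def composition(self) -> bool:
        """Phase 2: combining the technology with trained users."""
        ok = self.users_trained
        self.log.append(("composition", ok))
        return ok

    def actualization(self) -> bool:
        """Phase 3: new practices take hold only if every prior phase aligned."""
        ok = all(passed for _, passed in self.log)
        self.log.append(("actualization", ok))
        return ok

# The department that succeeded: alignment at every phase.
succeeded = Rollout("high", policy_specific=True, users_trained=True)
succeeded.individuation(); succeeded.composition()

# A department with subpar cameras and a generic policy: misaligned early,
# so positive practices never emerge at actualization.
stalled = Rollout("low", policy_specific=False, users_trained=True)
stalled.individuation(); stalled.composition()
```

The design point mirrors the finding: `actualization` cannot succeed on its own merits, because its outcome is a function of the alignment achieved in the earlier phases.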
IT implementation, Assemblage theory, body-worn camera, organizational change, police technology, process model
Communications of the Association for Information Systems (2024)
Design Knowledge for Virtual Learning Companions from a Value-centered Perspective
Ricarda Schlimbach, Bijan Khosrawi-Rad, Tim C. Lange, Timo Strohmann, Susanne Robra-Bissantz
This study develops design principles for Virtual Learning Companions (VLCs), which are AI-powered chatbots designed to help students with motivation and time management. Using a design science research approach, the authors conducted interviews, workshops, and built and tested several prototypes with students. The research aims to create a framework for designing VLCs that not only provide functional support but also build a supportive, companion-like relationship with the learner.
Problem
Working students in higher education often struggle to balance their studies with their jobs, leading to challenges with motivation and time management. While conversational AI like ChatGPT is becoming common, these tools often lack the element of companionship and a holistic approach to learning support. This research addresses the gap in how to design AI learning tools that effectively integrate motivation, time management, and relationship-building from a user-value-centered perspective.
Outcome
- The study produced a comprehensive framework for designing Virtual Learning Companions (VLCs), resulting in 9 design principles, 28 meta-requirements, and 33 design features.
- The findings are structured around a “value-in-interaction” model, which proposes that a VLC's value is created across three interconnected layers: the Relationship Layer, the Matching Layer, and the Service Layer.
- Key design principles include creating a human-like and adaptive companion, enabling proactive and reactive behavior, building a trustworthy relationship, providing supportive content, and fostering a motivational and ethical learning environment.
- Evaluation of a coded prototype revealed that different student groups have different preferences, emphasizing that VLCs must be adaptable to their specific educational context and user needs to be effective.
Host: Welcome to A.I.S. Insights, the podcast where we connect academic research to real-world business strategy, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re exploring a topic that’s becoming increasingly relevant in our AI-driven world: how to make our digital tools not just smarter, but more supportive. We’re diving into a study titled "Design Knowledge for Virtual Learning Companions from a Value-centered Perspective".
Host: In simple terms, it's about creating AI-powered chatbots that act as true companions, helping students with the very human challenges of motivation and time management. Here to break it all down for us is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna. It’s a fascinating study with huge implications.
Host: Let's start with the big picture. What is the real-world problem that this study is trying to solve?
Expert: Well, think about anyone trying to learn something new while juggling a job and a personal life. It could be a university student working part-time or an employee trying to upskill. The biggest hurdles often aren't the course materials themselves, but staying motivated and managing time effectively.
Host: That’s a struggle many of our listeners can probably relate to.
Expert: Exactly. And while we have powerful AI tools like ChatGPT that can answer questions, they function like a know-it-all tutor. They provide information, but they don't provide companionship. They don't check in on you, encourage you when you're struggling, or help you plan your week. This study addresses that gap.
Host: So it's about making AI more of a partner than just a tool. How did the researchers go about figuring out how to build something like that?
Expert: They used a very hands-on approach called design science research. Instead of just theorizing, they went through multiple cycles of building and testing. They started by conducting in-depth interviews with working students to understand their real needs. Then, they held workshops, designed a couple of conceptual prototypes, and eventually built and coded a fully functional AI companion that they tested with different student groups.
Host: So it’s a methodology that’s really grounded in user feedback. What were the key findings? What did they learn from all this?
Expert: The main outcome is a powerful framework for designing these Virtual Learning Companions, or VLCs. The big idea is that the companion's value is created through the interaction itself, which they break down into three distinct but connected layers.
Host: Three layers. Can you walk us through them?
Expert: Of course. First is the Relationship Layer. This is all about creating a human-like, trustworthy companion. The AI should be able to show empathy, maybe use a bit of humor, and build a sense of connection with the user over time. It’s the foundation.
Host: Okay, so it’s about the personality and the bond. What's next?
Expert: The second is the Matching Layer. This is about adaptation and personalization. The study found that a one-size-fits-all approach fails. The VLC needs to adapt to the user's individual learning style, their personality, and even their current mood or context.
Host: And the third layer?
Expert: That's the Service Layer. This is where the more functional support comes in. It includes features for time management, like creating to-do lists and setting reminders, as well as providing supportive learning content and creating a motivational environment, perhaps with gentle nudges or rewards.
Host: This all sounds great in theory, but did they see it work in practice?
Expert: They did, and they also uncovered a critical insight. When they tested their prototype, they found that full-time university students thought the AI’s language was too informal and colloquial. But a group of working professionals in a continuing education program found the exact same AI to be too formal!
Host: Wow, that’s a direct confirmation of what you said about the Matching Layer. The companion has to be adaptable.
Expert: Precisely. It proves that to be effective, these tools must be tailored to their specific audience and context.
Host: Alex, this is the crucial part for our audience. Why does this matter for business? What are the practical takeaways?
Expert: The implications are huge, Anna, and they go way beyond the classroom. Think about corporate training and HR. Imagine a new employee getting an AI companion that doesn't just teach them software systems, but helps them manage the stress of their first month and checks in on their progress and motivation. That could have a massive impact on engagement and retention.
Host: I can see that. It’s a much more holistic approach to onboarding. Where else?
Expert: For any EdTech company, this framework is a blueprint for building more effective and engaging products. It's about moving from simple content delivery to creating a supportive learning ecosystem. But you can also apply these principles to customer-facing bots. An AI that can build a relationship and adapt to a customer's technical skill or frustration level will provide far better service and build long-term loyalty.
Host: So the key business takeaway is to shift our thinking.
Expert: Exactly. The value of AI in these roles isn't just in the functional task it completes, but in the supportive, adaptive relationship it builds with the user. It’s the difference between an automated tool and a true digital partner.
Host: A fantastic insight. So, to summarize: today's professionals face real challenges with motivation and time management. This study gives us a three-layer framework—Relationship, Matching, and Service—to build AI companions that truly help. For businesses, this opens up new possibilities in corporate training, EdTech, and even customer relations.
Host: Alex, thank you so much for translating this complex study into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in. This has been A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more valuable knowledge for your business.
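The Matching Layer finding discussed above (the same companion reading as too informal to one group and too formal to another) can be sketched as a simple tone-selection step. The profile names, phrasings, and the mapping itself are illustrative, not the study's actual prototype logic:

```python
# Hypothetical sketch of the Matching Layer: the same companion message
# rendered at different formality levels depending on a coarse audience
# profile. All names and strings here are invented for illustration.
REGISTERS = {
    "casual":  "Hey! Ready to knock out this week's study plan?",
    "neutral": "Hi! Shall we put together your study plan for the week?",
    "formal":  "Good day. Would you like to schedule your study plan for the week?",
}

def pick_register(audience: str) -> str:
    """A real VLC would also weigh mood, context, and per-user feedback."""
    if audience == "full_time_student":
        return "formal"      # students found colloquial wording too informal
    if audience == "working_professional":
        return "casual"      # professionals found the same companion too formal
    return "neutral"
```

Even this toy version makes the evaluation result concrete: a single hard-coded tone cannot satisfy both groups, so adaptation has to be a first-class part of the design rather than an afterthought.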
Conversational Agent, Education, Virtual Learning Companion, Design Knowledge, Value