International Conference on Wirtschaftsinformatik (2025)
Structural Estimation of Auction Data through Equilibrium Learning and Optimal Transport
Markus Ewert and Martin Bichler
This study proposes a new method for analyzing auction data to understand bidders' private valuations. It extends an existing framework by reformulating the estimation challenge as an optimal transport problem, which avoids the statistical limitations of traditional techniques. The approach uses a proxy equilibrium model to analytically evaluate bid distributions, leading to more accurate and robust estimates.
Problem
Designing profitable auctions, such as setting an optimal reserve price, requires knowing how much bidders are truly willing to pay, but this information is hidden. Existing methods to estimate these valuations from observed bids often suffer from statistical biases and inaccuracies, especially with limited data, leading to poor auction design and lost revenue for sellers.
Outcome
- The proposed optimal transport-based estimator consistently outperforms established kernel-based techniques, showing significantly lower error in estimating true bidder valuations.
- The new method is more robust, providing accurate estimates even in scenarios with high variance in bidding behavior where traditional methods fail.
- In practical tests, reserve prices set using the new method's estimates led to significant revenue gains for the auctioneer, while prices derived from older methods resulted in zero revenue.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we’re diving into a fascinating study called “Structural Estimation of Auction Data through Equilibrium Learning and Optimal Transport.”
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, this sounds quite technical, but at its heart, it’s about understanding what people are truly willing to pay for something. Is that right?
Expert: That’s a perfect way to put it, Anna. The study introduces a new, more accurate method for analyzing auction data to uncover bidders' hidden, private valuations. It uses a powerful mathematical concept called 'optimal transport' to get around the limitations of older techniques.
Host: So, let’s start with the big picture. What is the real-world problem that this study is trying to solve?
Expert: The problem is a classic one for any business that uses auctions. Think of a company selling online ad space, or a government auctioning off broadcast licenses. To maximize their revenue, they need to design the auction perfectly, for instance by setting an optimal reserve price—the minimum bid they'll accept.
Host: But to do that, you'd need to know the highest price each bidder is secretly willing to pay.
Expert: Exactly, and that information is hidden. You only see the bids they actually make. For decades, analysts have used statistical methods to try and estimate those true valuations from the bids, but those methods have serious flaws.
Host: Flaws like what?
Expert: They often require huge amounts of clean data to be accurate, which is rare in the real world. With smaller or messier datasets, these traditional methods can produce biased and inaccurate estimates. This leads to poor auction design, like setting a reserve price that's either too low, leaving money on the table, or too high, scaring away all the bidders. Either way, the seller loses revenue.
Host: So how does this new approach avoid those pitfalls? What is 'optimal transport'?
Expert: Imagine you have the bids you've observed in one pile. And over here, you have a theoretical model of how rational bidders would behave. Optimal transport is essentially a mathematical tool for finding the most efficient way to 'move' the pile of observed bids to perfectly match the shape of the theoretical model.
Host: Like finding the shortest path to connect the data you have with the theory?
Expert: Precisely. By calculating that 'path' or 'transport map', the researchers can analytically determine the underlying valuations with much greater precision. It avoids the statistical guesswork of older methods, which are often sensitive to noise and small sample sizes. It’s a more direct and robust way to get to the truth.
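[Editor's illustration] The "transport map" idea can be made concrete with a small sketch. This is not the paper's estimator; it is a toy example under assumed distributions. In one dimension, the optimal transport map between two distributions is simply the monotone quantile-to-quantile matching, so pairing observed bid quantiles with the quantiles of a model's valuation distribution recovers an implied inverse bid function:

```python
import numpy as np

rng = np.random.default_rng(42)
n_bidders = 2

# Toy ground truth (assumption, for illustration only): valuations are
# Uniform(0, 1), and in a symmetric first-price auction the equilibrium bid
# is b(v) = v * (n - 1) / n. The analyst observes only the bids.
true_values = rng.uniform(0.0, 1.0, size=5000)
observed_bids = true_values * (n_bidders - 1) / n_bidders

# In one dimension, the optimal transport map between two distributions is
# the monotone rearrangement: send the q-th quantile of the observed bid
# distribution to the q-th quantile of the model's valuation distribution.
bid_quantiles = np.sort(observed_bids)

def model_value_quantile(q):
    """Quantile function of the assumed Uniform(0, 1) valuation model."""
    return q

q = (np.arange(len(bid_quantiles)) + 0.5) / len(bid_quantiles)
estimated_values = model_value_quantile(q)  # valuation matched to each sorted bid

# The implied inverse bid function should be close to v = b * n / (n - 1),
# i.e. a ratio of about 2.0 here (n = 2); we check it at the median bid.
mid = len(bid_quantiles) // 2
print(round(estimated_values[mid] / bid_quantiles[mid], 1))
```

With enough observed bids, the quantile matching recovers the inverse bid function without any kernel density estimation, which is the intuition behind the robustness claim in the study.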
Host: It sounds elegant. So, what were the key findings when they put this new method to the test?
Expert: The results were quite dramatic. First, the optimal transport method was consistently more accurate. It produced estimates of bidder valuations with significantly lower error compared to the established techniques.
Host: And was it more reliable with the 'messy' data you mentioned?
Expert: Yes, and this is a crucial point. It proved to be far more robust. In experiments with high variance in bidding behavior—scenarios where the older methods completely failed—this new approach still delivered accurate estimates. It can handle the unpredictability of real-world bidding.
Host: That all sounds great in theory, but does it actually lead to better business outcomes?
Expert: It does, and this was the most compelling finding. The researchers simulated setting a reserve price based on the estimates from their new method versus the old ones. The reserve price set using the new method led to significant revenue gains for the seller.
Host: And the old methods?
Expert: In the same test, the prices derived from the older methods were so inaccurate they led to zero revenue. The estimated reserve price was so high that it was predicted no one would bid at all. It’s a stark difference—going from zero revenue to a significant increase.
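[Editor's illustration] The zero-revenue outcome is easy to reproduce in a toy simulation. The following sketch is my own simplified model, not the paper's experiment: it assumes a second-price auction with Uniform(0, 1) valuations, where a reserve price set above every bidder's valuation sells nothing and earns nothing:

```python
import numpy as np

rng = np.random.default_rng(7)

def expected_revenue(reserve, n_bidders=2, n_auctions=10_000):
    """Average seller revenue in a second-price auction with a reserve price,
    assuming (for illustration) valuations drawn i.i.d. from Uniform(0, 1)."""
    values = rng.uniform(0.0, 1.0, size=(n_auctions, n_bidders))
    highest = values.max(axis=1)
    second = np.sort(values, axis=1)[:, -2]
    # The item sells only if the highest valuation clears the reserve;
    # the winner then pays max(second-highest valuation, reserve).
    sold = highest >= reserve
    revenue = np.where(sold, np.maximum(second, reserve), 0.0)
    return revenue.mean()

# With Uniform(0, 1) valuations the optimal reserve is 0.5, but a reserve
# overestimated above every possible valuation never sells the item.
print(round(expected_revenue(0.5), 2))  # near the theoretical optimum of 5/12
print(expected_revenue(1.5))            # zero revenue: no one ever bids
```

A reserve inferred from badly biased valuation estimates can easily land in that second regime, which is exactly the failure mode the study attributes to the older estimators.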
Host: That really brings it home. So, for the business leaders listening, what are the practical takeaways here? Why does this matter for them?
Expert: The most direct application is for any business involved in auctions. If you're in ad-tech, government procurement, or even selling assets, this is a tool to fundamentally improve your pricing strategy and increase your revenue. It allows you to make data-driven decisions with much more confidence.
Host: And beyond just setting a reserve price?
Expert: Absolutely. At a higher level, this is about getting a truer understanding of your market's demand and what your customers really value. That insight is gold. It can inform not just auction design, but broader product pricing, negotiation tactics, and strategic planning. It helps reduce the risk of mispricing, which is a major source of lost profit.
Host: Fantastic. So, to summarize: for any business running auctions, knowing what a bidder is truly willing to pay is the key to maximizing profit, but that information is hidden.
Host: This study provides a powerful new method using optimal transport to uncover those hidden values far more accurately and reliably than before. And as we've heard, the difference can be between earning zero revenue and earning a significant profit.
Host: Alex, thank you so much for breaking down this complex topic into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights — powered by Living Knowledge.
Perbaikan Proses Bisnis Onboarding Pelanggan di PT SEVIMA Menggunakan Heuristic Redesign (Improving the Customer Onboarding Business Process at PT SEVIMA Using Heuristic Redesign)
Ribka Devina Margaretha, Mahendrawathi ER, Sugianto Halim
This study addresses challenges in PT SEVIMA's customer onboarding process, where Account Managers (AMs) were not always aligned with client needs. Using a Business Process Management (BPM) Lifecycle approach combined with heuristic principles (Resequencing, Specialize, Control Addition, and Empower), the research redesigns the existing workflow. The goal is to improve the matching of AMs to clients, thereby increasing onboarding efficiency and customer satisfaction.
Problem
PT SEVIMA, an IT startup for the education sector, struggled with an inefficient customer onboarding process. The primary issue was the frequent mismatch between the assigned Account Manager's skills and the specific, technical needs of the new client, leading to implementation delays and decreased satisfaction.
Outcome
- Recommends grouping Account Managers (AMs) based on specialization profiles built from post-project evaluations.
- Suggests moving the initial client needs survey to occur before an AM is assigned to ensure a better match.
- Proposes involving the technical migration team earlier in the process to align strategies from the start.
- These improvements aim to enhance onboarding efficiency, reduce rework, and ultimately increase client satisfaction.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. In today's fast-paced business world, how you welcome a new customer can make or break the entire relationship. Today, we're diving into a study that tackles this very challenge.
Host: It’s titled, "Perbaikan Proses Bisnis Onboarding Pelanggan di PT SEVIMA Menggunakan Heuristic Redesign", which translates to "Improving the Customer Onboarding Business Process at PT SEVIMA Using Heuristic Redesign". It explores how an IT startup, PT SEVIMA, redesigned their customer onboarding process to better match their account managers to client needs, boosting both efficiency and satisfaction. Here to break it all down for us is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. What was the core problem that PT SEVIMA was trying to solve?
Expert: It's a classic startup growing pain. PT SEVIMA provides software for the education sector. Their success hinges on getting new university clients set up smoothly. But they had a major bottleneck: they were assigning Account Managers, or AMs, to new clients without a deep understanding of the client's specific technical needs.
Host: So it was a mismatch of skills?
Expert: Exactly. You might have an AM who is brilliant with financial systems assigned to a client whose main challenge is student registration. The study's analysis, using tools like a fishbone diagram, showed this created a domino effect: implementation delays, frustrated clients, and a lot of rework for the internal teams. It was inefficient and hurting customer relationships right from the start.
Host: It sounds like a problem many companies could face. So, how did the researchers approach fixing this?
Expert: They used a structured method called Business Process Management, but combined it with something called heuristic principles. It sounds technical, but it's really about applying practical, proven rules of thumb to improve a workflow. Think of it as a toolkit of smart solutions.
Host: Can you give us an example of one of those "smart solutions"?
Expert: Absolutely. The four key principles they used were Resequencing, Specialization, Control Addition, and Empowerment. Resequencing, for instance, just means changing the order of steps. They found that one simple change could have a huge impact.
Host: I'm intrigued. What were the key findings or recommendations that came out of this approach?
Expert: There were three game-changers. First, using that Resequencing principle, they recommended moving the initial client needs survey to happen *before* an Account Manager is assigned. Get a deep understanding of the client's needs first, then pick the right person for the job.
Host: That seems so logical, yet it’s a step that's often overlooked. What was the second finding?
Expert: That was about Specialization. The study proposed grouping AMs into specialist profiles based on their skills and performance on past projects. After each project, AMs are evaluated on their expertise in areas like data management or academic systems. This creates a clear profile of who is good at what.
Host: So you’re not just assigning the next available person, you’re matching a specialist to a specific problem.
Expert: Precisely. And the third key recommendation was about Empowerment. They suggested involving the technical migration team much earlier in the process. Instead of the AM handing down instructions, the tech team is part of the initial strategy session, which helps them anticipate problems and align on the best approach from day one.
Host: This all sounds incredibly practical. Let's shift to the big question for our listeners: why does this matter for their businesses, even if they aren't in educational tech?
Expert: This is the most crucial part. These findings offer universal lessons for any business. First, it proves that customer onboarding is a strategic process, not just an administrative checklist. A smooth start builds trust and dramatically improves long-term retention.
Host: What's the second big takeaway?
Expert: Don't just assign people, *match* them. The idea of creating specialization profiles is powerful. Every manager should know their team's unique strengths and align them with the right tasks or clients. It reduces errors, builds employee confidence, and delivers better results for the customer.
Host: It’s about putting your players in the right positions on the field.
Expert: Exactly. And finally, front-load your discovery process. The study showed that the simple act of moving a survey to the beginning of the process prevents misunderstandings and costly rework. Take the time to understand your customer's reality deeply before you start building or implementing a solution. It’s about being proactive, not reactive.
Host: Fantastic insights, Alex. So, to recap for our listeners: a smarter onboarding process comes from matching the right expertise to the client, understanding their needs deeply before you begin, and empowering your technical teams by bringing them in early.
Host: Alex Ian Sutherland, thank you so much for translating this study into such clear, actionable advice.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more valuable lessons from the world of business and technology research.
Business Process Redesign, Customer Onboarding, Knowledge-Intensive Process, Heuristics Method, Startup, BPM Lifecycle
MIS Quarterly Executive (2023)
How Boards of Directors Govern Artificial Intelligence
Benjamin van Giffen, Helmuth Ludwig
This study investigates how corporate boards of directors oversee and integrate Artificial Intelligence (AI) into their governance practices. Based on in-depth interviews with high-profile board members from diverse industries, the research identifies common challenges and provides examples of effective strategies for board-level AI governance.
Problem
Despite the transformative impact of AI on the business landscape, the majority of corporate boards struggle to understand its implications and their role in governing it. This creates a significant gap, as boards have a fiduciary responsibility to oversee strategy, risk, and investment related to critical technologies, yet AI is often not a mainstream boardroom topic.
Outcome
- Identified four key groups of board-level AI governance issues: Strategy and Firm Competitiveness, Capital Allocation, AI Risks, and Technology Competence.
- Boards should ensure AI is integrated into the company's core business strategy by evaluating its impact on the competitive landscape and making it a key topic in annual strategy meetings.
- Effective capital allocation involves encouraging AI experimentation, securing investments in foundational AI capabilities, and strategically considering external partnerships and acquisitions.
- To manage risks, boards must engage with experts, integrate AI-specific risks into Enterprise Risk Management (ERM) frameworks, and address ethical, reputational, and legal challenges.
- Enhancing technology competence requires boards to develop their own AI literacy, review board and committee composition for relevant expertise, and include AI competency in executive succession planning.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a critical topic for every company leader: governance. Specifically, we're looking at a fascinating new study titled "How Boards of Directors Govern Artificial Intelligence."
Host: It investigates how corporate boards oversee and integrate AI into their governance practices, based on interviews with high-profile board members. Here to break it all down for us is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: Let's start with the big picture. We hear a lot about AI's potential, but what's the real-world problem this study is trying to solve for boards?
Expert: The problem is a major governance gap. The study points out that while AI is completely reshaping the business landscape, most corporate boards are struggling to understand it. They have a fiduciary duty to oversee strategy, risk, and major investments, but AI often isn't even a mainstream topic in the boardroom.
Host: So, management might be racing ahead with AI, but the board, the ultimate oversight body, is being left behind?
Expert: Exactly. And that's risky. AI requires huge, often uncertain, capital investments. It also introduces entirely new legal, ethical, and reputational risks that many boards are simply not equipped to handle. This gap between the technology's impact and the board's understanding is what the study addresses.
Host: How did the researchers get inside the boardroom to understand this dynamic? What was their approach?
Expert: They went straight to the source. The research is based on a series of in-depth, confidential interviews with sixteen high-profile board members from a huge range of industries—from tech and finance to healthcare and manufacturing. They also spoke with executive search firms to understand what companies are looking for in new directors.
Host: So, based on those conversations, what were the key findings? What are the big themes boards need to be thinking about?
Expert: The study organized the challenges into four key groups. The first is Strategy and Firm Competitiveness. Boards need to ensure AI is actually integrated into the company’s core strategy, not just a flashy side project.
Host: Meaning they should be asking how AI will help the company win in the market?
Expert: Precisely. The second is Capital Allocation. This is about more than just signing checks. It's about encouraging experimentation—what the study calls ‘lighthouse projects’—and making strategic investments in foundational capabilities, like data platforms, that will pay off in the long run.
Host: That makes sense. What's the third group?
Expert: AI Risks. This is a big one. We're not just talking about a system crashing. Boards need to oversee ethical risks, like algorithmic bias, and major reputational and legal risks. The recommendation is to integrate these new AI-specific risks directly into the company’s existing Enterprise Risk Management framework.
Host: And the final one?
Expert: It's called Technology Competence. And this is crucial—it applies to the board itself.
Host: Does that mean every board director needs to become a data scientist?
Expert: Not at all. It’s about developing AI literacy—understanding the business implications. The study found that leading boards are actively reviewing their composition to ensure they have relevant expertise and, importantly, they're including AI competency in CEO and executive succession planning.
Host: That brings us to the most important question, Alex. For the business leaders and board members listening, why does this matter? What is the key takeaway they can apply tomorrow?
Expert: The most powerful and immediate thing a board can do is start asking the right questions. The board's role isn't necessarily to have all the answers, but to guide the conversation and ensure management is thinking through the critical issues.
Host: Can you give us an example of a question a director should be asking?
Expert: Certainly. For strategy, they could ask: "How are our competitors using AI, and how does our approach give us a competitive advantage?" On risk, they might ask: "What is our framework for evaluating the ethical risks of a new AI system before it's deployed?" These questions signal the board's priorities and drive accountability.
Host: So, the first step is simply opening the dialogue.
Expert: Yes. That's the catalyst. The study makes it clear that in many companies, if the board doesn't start the conversation on AI governance, no one will.
Host: A powerful call to action. To summarize: this study shows that boards have a critical and urgent role in governing AI. They need to focus on four key areas: weaving AI into strategy, allocating capital wisely, managing new and complex risks, and building their own technological competence.
Host: And the journey begins with asking the right questions. Alex Ian Sutherland, thank you for these fantastic insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights. Join us next time as we continue to explore the ideas shaping business and technology.
AI governance, board of directors, corporate governance, artificial intelligence, strategic management, risk management, technology competence
MIS Quarterly Executive (2023)
Evolution of the Metaverse
Mary Lacity, Jeffrey K. Mullins, Le Kuai
This paper explores the potential opportunities and risks of the emerging metaverse for business and society through an interview format with leading researchers. The study analyzes the current state of metaverse technologies, their potential business applications, and critical considerations for governance and ethical implementation for IT practitioners.
Problem
Following renewed corporate interest and massive investment, the concept of the metaverse has generated significant hype, but businesses lack clarity on its definition, tangible value, and long-term impact. This creates uncertainty for leaders about how to approach the technology, differentiate it from past virtual worlds, and navigate the significant risks of surveillance, data privacy, and governance.
Outcome
- The business value of the metaverse centers on providing richer, safer experiences for customers and employees, reducing costs, and meeting organizational goals through applications like immersive training, virtual collaboration, and digital twins.
- Companies face a critical choice between centralized 'Web 2' platforms, which monetize user data, and decentralized 'Web 3' models that offer users more control over their digital assets and identity.
- The metaverse can improve employee onboarding, training for dangerous tasks, and collaboration, offering a greater sense of presence than traditional videoconferencing.
- Key challenges include the lack of a single, interoperable metaverse (which is likely over a decade away), limited current capabilities of decentralized platforms, and the potential for negative consequences like addiction and surveillance.
- Businesses are encouraged to explore potential use cases, participate in creating open standards, and consider both the immense promise and potential perils before making significant investments.
Host: Welcome to A.I.S. Insights, the podcast where we connect business leaders with the latest in academic research. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a topic surrounded by enormous hype and investment: the metaverse. We’ll be exploring a fascinating new study titled “Evolution of the Metaverse.”
Host: This study analyzes the current state of metaverse technologies, their potential business applications, and the critical ethical considerations for IT practitioners. To help us unpack it all, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, the term 'metaverse' is everywhere, and companies are pouring billions into it. But for many business leaders, it's still a very fuzzy concept. What’s the core problem this study addresses?
Expert: You've hit on it exactly. There’s a huge gap between the hype and the reality. Business leaders are struggling with a lack of clarity. They’re asking: What is the metaverse, really? How is it different from the virtual worlds of the past, like Second Life? And most importantly, what is its tangible value?
Expert: This uncertainty creates real risk. Without a clear framework, it’s hard to know how to invest, or how to navigate the significant dangers the study points out, like intense user surveillance and data privacy issues. One of the researchers even described the worst-case scenario as "surveillance capitalism on steroids."
Host: That’s a powerful warning. So how did the researchers approach such a broad and complex topic?
Expert: Instead of a traditional lab experiment, this study is structured as a deep conversation with a team of leading academics who have been researching this space for years. They synthesized their different perspectives—from optimistic to cautious—to create a balanced view of the opportunities, risks, and the future trajectory of these technologies.
Host: That’s a great approach for a topic that’s still evolving. Let's get into what they found. What did the study identify as the real business value of the metaverse today?
Expert: The value isn't in some far-off sci-fi future; it's in practical applications that provide richer, safer experiences. Think of things like creating a 'digital twin' of a factory. The study mentions an auto manufacturer that did this to plan a model changeover virtually, saving massive costs by not having to shut down the physical assembly line for trial and error.
Host: So it's about simulation and planning. What about for employees?
Expert: Absolutely. The study highlights immersive training as a key benefit. For example, Accenture onboarded 150,000 new employees in a virtual world, creating a stronger sense of presence and connection than a standard video call. It’s also invaluable for training on dangerous tasks, like handling hazardous materials, where mistakes in a virtual setting have no real-world consequences.
Host: The study also mentions a critical choice companies are facing between two different models for the metaverse. Can you break that down for us?
Expert: Yes, and this is crucial. The choice is between a centralized 'Web 2' model and a decentralized 'Web 3' model. The Web 2 version, led by companies like Meta, is a closed ecosystem. The platform owner controls everything and typically monetizes user data.
Expert: The Web 3 model, built on technologies like blockchain, is about user ownership. In this version, users would control their own digital identity and assets, and could move them between different virtual worlds. The challenge, as the study notes, is that these Web 3 platforms are far less developed right now.
Host: Which brings us to the big question for business leaders listening: what does this all mean for them? What are the key takeaways?
Expert: The first takeaway is to start exploring, but with a clear purpose. Don't build a metaverse presence just for the sake of it. Instead, identify a specific business problem that could be solved with immersive technology, like improving employee safety or reducing prototyping costs.
Host: So, focus on practical use cases, not just marketing.
Expert: Exactly. Second, businesses should consider participating in the creation of open standards. The study suggests that a single, interoperable metaverse is likely more than a decade away. Getting involved now gives companies a voice in shaping the future and ensuring it isn't dominated by just one or two tech giants.
Expert: And finally, leaders must weigh the promise against the perils. They need to understand the governance model they’re buying into. For internal training, a centralized platform—what the study calls an "intraverse"—might be perfectly fine. But for customer-facing applications, the questions of data ownership and privacy become paramount.
Host: This has been incredibly insightful, Alex. It seems the message is to approach the metaverse not as a single, flashy destination, but as a set of powerful tools that require careful, strategic implementation.
Host: To summarize for our listeners: the business value of the metaverse is in specific, practical applications like immersive training and digital twins. Leaders face a critical choice between closed, company-controlled platforms and open, user-centric models. The best path forward is to explore potential use cases cautiously and participate in building an open future.
Host: Alex Ian Sutherland, thank you so much for breaking down this complex topic for us.
Expert: My pleasure, Anna.
Host: And a big thank you to our audience for tuning in to A.I.S. Insights. We’ll see you next time.
Metaverse, Virtual Worlds, Augmented Reality, Web 3.0, Digital Twin, Business Strategy, Governance
MIS Quarterly Executive (2025)
How HireVue Created "Glass Box" Transparency for its AI Application
This paper presents a case study on HireVue, a company that provides an AI application for assessing job interviews. It describes the transparency-related challenges HireVue faced and explains how it addressed them by developing a "glass box" approach, which focuses on making the entire system of AI development and deployment understandable, rather than just the technical algorithm.
Problem
AI applications used for critical decisions, such as hiring, are often perceived as technical "black boxes." This lack of clarity creates significant challenges for businesses in trusting the technology, ensuring fairness, mitigating bias, and complying with regulations, which hinders the responsible adoption of AI in recruitment.
Outcome
- The study introduces a "glass box" model for AI transparency, which shifts focus from the technical algorithm to the broader sociotechnical system, including design processes, client interactions, and organizational functions.
- HireVue implemented five types of transparency practices: pre-deployment client-focused, internal, post-deployment client-focused, knowledge-related, and audit-related.
- This multi-faceted approach helps build trust with clients, regulators, and applicants by providing clarity on the AI's application, limitations, and validation processes.
- The findings serve as a practical guide for other AI software companies on how to create effective and comprehensive transparency for their own applications, especially in high-stakes fields.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into the world of artificial intelligence in a place many of us are familiar with: the job interview. With me is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: We're discussing a fascinating case study titled "How HireVue Created 'Glass Box' Transparency for its AI Application." It explores how HireVue, a company using AI to assess job interviews, tackled the challenge of transparency.
Expert: Exactly. They moved beyond just trying to explain the technical algorithm and instead focused on making the entire system of AI development and deployment understandable.
Host: Let's start with the big problem here. Businesses are increasingly using AI for critical decisions like hiring, but there's a huge fear of the "AI black box." What does that mean in this context?
Expert: It means that for most users—recruiters, hiring managers, even executives—the AI's decision-making process is opaque. You put interview data in, a recommendation comes out, but you don't know *why*.
Host: And that lack of clarity creates real business risks, right?
Expert: Absolutely. The study points out major challenges. There's the issue of trust—can we rely on this technology? There's the risk of hidden bias against certain groups. And crucially, there are growing legal and regulatory hurdles, like the EU AI Act, which classifies hiring AI as "high-risk." Without transparency, companies can’t ensure fairness or prove compliance.
Host: So facing this black box problem, what was HireVue's approach? How did they create what the study calls a "glass box"?
Expert: The key insight was that trying to explain the complex math of a modern AI algorithm to a non-expert is a losing battle. Instead of focusing only on the technical core, they made the entire process surrounding it transparent. This is the "glass box" model.
Host: So it's less about the engine itself and more about the entire car and how it's built and operated?
Expert: That's a great analogy. It encompasses the design process, how they train the AI, how they interact with clients to set it up, and how they monitor its performance over time. It’s a broader, more systemic view of transparency.
Host: The study highlights that this was put into practice through five specific types of transparency. Can you walk us through the key ones?
Expert: Of course. The first is pre-deployment client-focused practices. Before a client even uses the system, HireVue has frank conversations about what the AI can and can’t do. For example, they explain it's best for high-volume roles, not for when you're hiring just a few people.
Host: So, managing expectations from the very beginning. What comes next?
Expert: Internally, they focus on meticulous documentation of the AI's design and validation. Then, post-deployment, they provide clients with outputs that are easy to interpret. Instead of a raw score like 92.5, they group candidates into three tiers—top, middle, and bottom. This helps managers make practical decisions without getting lost in tiny, meaningless score differences.
Host: That sounds much more user-friendly. And the other practices?
Expert: The last two are knowledge-related and audit-related. HireVue publishes its research in white papers and academic journals. And importantly, they engage independent third-party auditors to review their systems for fairness and bias. This builds huge credibility with clients and regulators.
Host: This is the crucial part for our listeners, Alex. Why does this "glass box" approach matter for business leaders? What's the key takeaway?
Expert: The biggest takeaway is that AI transparency is not an IT problem; it's a core business strategy. It involves multiple departments, from data science and legal to sales and customer success.
Host: So it's a team sport.
Expert: Precisely. This approach isn't just about compliance. It’s about building deep, lasting trust with your customers. When you can explain your system, validate its fairness, and guide clients on its proper use, you turn a black box into a trusted tool. It becomes a competitive advantage.
Host: It sounds like this model could be a roadmap for any company developing or deploying high-stakes AI, not just in hiring.
Expert: It is. The principles are universal. Engage clients at every step. Design interfaces that are intuitive. Be proactive about compliance. And treat transparency as an ongoing process, not a one-time fix. This builds a more ethical, robust, and defensible AI product.
Host: Fantastic insights. So to summarize, the study on HireVue shows that the best way to address the AI "black box" is to build a "glass box" around it—making the entire sociotechnical system of people, processes, and validation transparent.
Expert: That’s the core message. It’s about clarity, accountability, and ultimately, trust.
Host: Alex, thank you for breaking that down for us. It’s a powerful lesson in responsible AI implementation.
Host: And thank you to our listeners for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the intersection of business and technology.
AI transparency, algorithmic hiring, glass box model, ethical AI, recruitment technology, HireVue, case study
MIS Quarterly Executive (2025)
How Germany Successfully Implemented Its Intergovernmental FLORA System
Julia Amend, Simon Feulner, Alexander Rieger, Tamara Roth, Gilbert Fridgen, and Tobias Guggenberger
This paper presents a case study on Germany's implementation of FLORA, a blockchain-based IT system designed to manage the intergovernmental processing of asylum seekers. It analyzes how the project navigated legal and technical challenges across different government levels. Based on the findings, the study offers three key recommendations for successfully deploying similar complex, multi-agency IT systems in the public sector.
Problem
Governments face significant challenges in digitalizing services that require cooperation across different administrative layers, such as federal and state agencies. Legal mandates often require these layers to maintain separate IT systems, which complicates data exchange and modernization. Germany's asylum procedure previously relied on manually sharing Excel-based lists between agencies, a process that was slow, error-prone, and created data privacy risks.
Outcome
- FLORA replaced inefficient Excel-based lists with a decentralized system, enabling a more efficient and secure exchange of procedural information between federal and state agencies.
- The system created a 'single procedural source of truth,' which significantly improved the accuracy, completeness, and timeliness of information for case handlers.
- By streamlining information exchange, FLORA reduced the time required for initial stages of the asylum procedure by up to 50%.
- The blockchain-based architecture enhanced legal compliance by reducing procedural errors and providing a secure way to manage data that adheres to strict GDPR privacy requirements.
- The study recommends that governments consider decentralized IT solutions to avoid the high hidden costs of centralized systems, deploy modular solutions to break down legacy architectures, and use a Software-as-a-Service (SaaS) model to lower initial adoption barriers for agencies.
Host: Welcome to A.I.S. Insights, the podcast where we connect Living Knowledge to your business. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating case of digital transformation in a place you might not expect: government administration. We're looking at a study titled "How Germany Successfully Implemented Its Intergovernmental FLORA System."
Host: With me is our analyst, Alex Ian Sutherland. Alex, in simple terms, what is this study all about?
Expert: Hi Anna. This study is a deep dive into FLORA, a blockchain-based IT system Germany built to manage the complex process of handling asylum applications. It’s a great example of how to navigate serious legal and technical hurdles when multiple, independent government agencies need to work together.
Host: And this is a common struggle, right? Getting different departments, or in this case, entire levels of government, to use the same playbook.
Expert: Exactly. Governments often face a big challenge: legal rules require federal and state agencies to have their own separate IT systems. This makes sharing data securely and efficiently a real nightmare.
Host: So what was Germany's asylum process like before FLORA?
Expert: It was surprisingly low-tech and risky. The study describes how agencies were manually filling out Excel spreadsheets and emailing them back and forth. This process was incredibly slow, full of errors, and created huge data privacy risks.
Host: A classic case of digital transformation being desperately needed. How did the researchers get such an inside look at how this project was fixed?
Expert: They conducted a long-term case study, following the FLORA project for six years, right from its initial concept in 2018 through its successful rollout. They interviewed nearly 100 people involved, analyzed thousands of pages of documents, and were present in project meetings. It's a very thorough look behind the curtain.
Host: So after all that research, what were the big wins? How did FLORA change things?
Expert: The results were dramatic. First, it replaced those insecure Excel lists with a secure, decentralized system. This meant federal and state agencies could share procedural information efficiently without giving up control of their own core systems.
Host: That sounds powerful. What else did they find?
Expert: The system created what the study calls a 'single procedural source of truth.' For the first time, every case handler, regardless of their agency, was looking at the same accurate, complete, and up-to-date information.
Host: I can imagine that saves a lot of headaches. Did it actually make the process faster?
Expert: It did. The study found that by streamlining this information exchange, FLORA reduced the time needed for the initial stages of the asylum procedure by up to 50 percent.
Host: Wow, a 50 percent reduction is massive. Was there also an impact on security and compliance?
Expert: Absolutely. The blockchain-based design was key here. It provided a secure, transparent log of every step, which reduced procedural errors and made it easier to comply with strict GDPR privacy laws.
Host: This is a fantastic success story for the public sector. But Alex, what are the key takeaways for our business listeners? How can a company apply these lessons?
Expert: There are three huge takeaways. First, when you're trying to connect siloed departments or integrate a newly acquired company, don't automatically default to building one giant, centralized system.
Host: Why not? Isn't that the simplest approach?
Expert: It seems simple, but the study highlights the massive 'hidden costs'—like trying to force everyone to standardize their processes or overhauling existing software. FLORA’s decentralized approach allowed different agencies to cooperate without losing their autonomy. It's a model for flexible integration.
Host: That makes sense. What's the second lesson?
Expert: Deploy modular solutions to break down legacy architecture. Instead of a risky 'rip and replace' project, FLORA was designed to complement existing systems. It's about adding new, flexible layers on top of the old, and gradually modernizing piece by piece. Any business with aging critical software should pay attention to this.
Host: So, evolution, not revolution. And the final takeaway?
Expert: Use a Software-as-a-Service, or SaaS, model to lower adoption barriers. The study explains that the federal agency initially built and hosted FLORA for the state agencies at no cost. This removed the financial and technical hurdles, getting everyone on board quickly. Once they saw the value, they were willing to share the costs later on.
Host: That's a powerful strategy. So, to recap: Germany's FLORA project teaches us that for complex integration projects, businesses should consider decentralized systems to maintain flexibility, use modular solutions to tackle legacy tech, and leverage a SaaS model to drive initial adoption.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning in to A.I.S. Insights, powered by Living Knowledge. We'll see you next time.
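The 'single procedural source of truth' idea from this episode can be pictured as a shared, tamper-evident log in which each agency's entry references the one before it. The Python sketch below is a toy illustration under that assumption, not FLORA's actual architecture; all class and field names are invented.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class ProceduralLog:
    """Toy hash-chained log of procedural events shared across agencies.

    Each entry's hash covers its content plus the previous entry's hash,
    so any later tampering with the history is detectable by every party.
    """
    entries: list = field(default_factory=list)

    def append(self, agency: str, event: str) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {"agency": agency, "event": event, "prev": prev}
        # Hash a canonical (sorted-key) JSON encoding of the record body.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def verify(self) -> bool:
        """Recompute every hash and check the chain links; False if tampered."""
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("agency", "event", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

Because each hash depends on the previous one, altering any historical record invalidates every entry after it, which is roughly why a blockchain-style log can serve as a shared record that no single agency controls.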
intergovernmental IT systems, digital government, blockchain, public sector innovation, case study, asylum procedure, Germany
MIS Quarterly Executive (2025)
The Danish Business Authority's Approach to the Ongoing Evaluation of AI Systems
Oliver Krancher, Per Rådberg Nagbøl, Oliver Müller
This study examines the strategies employed by the Danish Business Authority (DBA), a pioneering public-sector adopter of AI, for the continuous evaluation of its AI systems. Through a case study of the DBA's practices and their custom X-RAI framework, the paper provides actionable recommendations for other organizations on how to manage AI systems responsibly after deployment.
Problem
AI systems can degrade in performance over time, a phenomenon known as model drift, leading to inaccurate or biased decisions. Many organizations lack established procedures for the ongoing monitoring and evaluation of AI systems post-deployment, creating risks of operational failures, financial losses, and non-compliance with regulations like the EU AI Act.
Outcome
- Organizations need a multi-faceted approach to AI evaluation, as single strategies like human oversight or periodic audits are insufficient on their own.
- The study presents the DBA's three-stage evaluation process: pre-production planning, in-production monitoring, and formal post-implementation evaluations.
- A key strategy is 'enveloping' AI systems and their evaluations, which means setting clear, pre-defined boundaries for the system's use and how it will be monitored to prevent misuse and ensure accountability.
- The DBA uses an MLOps platform and an 'X-RAI' (Transparent, Explainable, Responsible, Accurate AI) framework to ensure traceability, automate deployments, and guide risk assessments.
- Formal evaluations should use deliberate sampling, including random and negative cases, and 'blind' reviews (where caseworkers assess a case without seeing the AI's prediction) to mitigate human and machine bias.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. Today, we’re talking about a critical challenge for any business using artificial intelligence: how do you ensure your AI systems remain accurate and fair long after they’ve been launched?
Host: We're diving into a fascinating study from MIS Quarterly Executive titled, "The Danish Business Authority's Approach to the Ongoing Evaluation of AI Systems".
Host: This study examines the strategies of a true pioneer, the Danish Business Authority, and how they continuously evaluate their AI to manage it responsibly. They’ve even created a custom framework to do it.
Host: Here to unpack this with me is our expert analyst, Alex Ian Sutherland. Alex, welcome to the show.
Expert: Thanks for having me, Anna.
Host: Alex, let's start with the big problem here. Many businesses think that once an AI model is built and tested, the job is done. Why is that a dangerous assumption?
Expert: It’s a very dangerous assumption. The study makes it clear that AI systems can degrade over time in a process called 'model drift'. The world is constantly changing, and if the AI isn't updated, its decisions can become inaccurate or even biased.
Host: Can you give us a real-world example of this drift?
Expert: Absolutely. The study observed an AI at the Danish Business Authority, or DBA, that was designed to recognize signatures on documents. It worked perfectly at first. But a few months later, its accuracy dropped significantly because citizens started using new digital signature technologies the AI had never seen before.
Host: So the AI simply becomes outdated. What are the risks for a business when that happens?
Expert: The risks are huge. We’re talking about operational failures, bad financial decisions, and failing to comply with major regulations like the EU AI Act, which specifically requires ongoing monitoring. It can lead to a total loss of trust in the technology.
Host: The DBA seems to have found a solution. How did this study investigate their approach?
Expert: The researchers engaged in a six-year collaboration with the DBA, doing a deep case study on their 14 operational AI systems. These systems do important work, like predicting fraud in COVID compensation claims or verifying new company registrations.
Host: And out of this collaboration came a specific framework, right?
Expert: Yes, a framework they co-developed called X-RAI. That’s X-R-A-I, and it stands for Transparent, Explainable, Responsible, and Accurate AI. In practice, it’s a comprehensive process that guides them from the initial risk assessment all the way through the system's entire lifecycle.
Host: So what were the key findings? What can other organizations learn from the DBA’s success?
Expert: The most important finding is that you need a multi-faceted approach. There is no single silver bullet. Just having a human review the AI’s output isn't nearly enough to catch all the potential problems.
Host: What does a multi-faceted approach look like in practice?
Expert: The DBA uses a three-stage process. First is pre-production. Before an AI system even goes live, they define very clear boundaries for what it can and can't do. They call this 'enveloping' the AI, like building a virtual fence around it to prevent misuse.
Host: Enveloping. That’s a powerful visual. What comes next?
Expert: The second stage is in-production monitoring. This is about continuous, daily vigilance. Caseworkers are trained to maintain a critical mindset and not just blindly accept the AI's suggestions. They hold regular team meetings to discuss complex cases and spot unusual patterns from the AI.
Host: And the third stage? I imagine that's a more formal check-in.
Expert: Exactly. That stage is formal evaluations. Here, they get incredibly systematic. They don’t just check the high-risk cases the AI flags. They deliberately sample random cases and even low-risk cases to find errors the AI might be missing.
Expert: And a key strategy here is conducting 'blind' reviews. A caseworker assesses a case without seeing the AI’s prediction first. This is crucial for preventing human bias, because we know people are easily influenced by a machine's recommendation.
Host: This is all incredibly practical. Let’s bring it home for our business listeners. What are the key takeaways for a leader trying to implement AI responsibly?
Expert: I'd point to three main things. First, establish a formal governance structure for AI post-deployment. Don't let it be an afterthought. Define roles, metrics, and a clear schedule for evaluations, just as the X-RAI framework does.
Host: Okay, so governance is number one. What’s second?
Expert: Second is to actively build a culture of 'reflective use'. Train your teams to treat AI as a powerful but imperfect tool, not an all-knowing oracle. The DBA went as far as changing job descriptions to include skills in understanding machine learning and data.
Host: That’s a serious commitment to changing the culture. And the third takeaway?
Expert: The third is to invest in the right digital infrastructure. The DBA built what they call an MLOps platform with tools to automate monitoring and ensure traceability. One tool, 'Record Keeper', can track exactly which model version made a decision on a specific date. That kind of audit trail is invaluable.
Host: So it's really about the intersection of a clear process, a critical culture, and the right platform.
Expert: That's it exactly. Process, people, and platform, working together.
Host: To summarize then: AI is not a 'set it and forget it' tool. To manage the inevitable risk of model drift, organizations need a structured, ongoing evaluation strategy.
Host: As we learned from the Danish Business Authority, this means planning ahead with 'enveloping', empowering your people with continuous oversight, and running formal evaluations using smart tactics like blind reviews.
Host: The lesson for every business is clear: build a governance framework, foster a critical culture, and invest in the technology to support it.
Host: Alex, this has been incredibly insightful. Thank you for breaking it all down for us.
Expert: It was my pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we explore the future of business and technology.
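The formal-evaluation tactics described in this episode (deliberate sampling that mixes AI-flagged, random, and low-risk cases, plus 'blind' reviews where the AI's prediction is withheld) can be sketched in a few lines of Python. This is a hypothetical illustration only; the case schema, function names, and sample sizes are invented and do not reflect the DBA's actual tooling.

```python
import random

def build_evaluation_sample(cases, n_flagged=20, n_random=10, n_low_risk=10, seed=42):
    """Assemble a review sample mixing AI-flagged, random, and low-risk cases.

    `cases` is a list of dicts with 'id', 'risk_score', and 'ai_prediction'
    keys (a hypothetical schema). Sampling low-risk and random cases as well
    as flagged ones helps surface errors the AI itself would never flag.
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    flagged = sorted(cases, key=lambda c: c["risk_score"], reverse=True)[:n_flagged]
    low_risk = sorted(cases, key=lambda c: c["risk_score"])[:n_low_risk]
    remaining = [c for c in cases if c not in flagged and c not in low_risk]
    randoms = rng.sample(remaining, min(n_random, len(remaining)))
    return flagged + randoms + low_risk

def blind_view(case):
    """Return a copy of the case with the AI's prediction withheld, so the
    caseworker forms an independent judgment before seeing the machine's."""
    return {k: v for k, v in case.items() if k != "ai_prediction"}
```

The design point is that the reviewer works from `blind_view(case)` first and only compares against the AI's prediction afterwards, which counters the anchoring effect the expert describes.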
AI evaluation, AI governance, model drift, responsible AI, MLOps, public sector AI, case study
MIS Quarterly Executive (2025)
How Stakeholders Operationalize Responsible AI in Data-Sensitive Contexts
Shivaang Sharma, Angela Aristidou
This study investigates the challenges of implementing responsible AI in complex, multi-stakeholder environments such as humanitarian crises. Researchers analyzed the deployment of six AI tools, identifying significant gaps in expectations and values among developers, aid agencies, and affected populations. Based on these findings, the paper introduces the concept of "AI Responsibility Rifts" (AIRRs) and proposes the SHARE framework to help organizations navigate these disagreements.
Problem
Traditional approaches to AI safety focus on objective, technical risks like hallucinations or data bias. This perspective is insufficient for data-sensitive contexts because it overlooks the subjective disagreements among diverse stakeholders about an AI tool's purpose, impact, and ethical boundaries. These unresolved conflicts, or "rifts," can hinder the adoption of valuable AI tools and lead to unintended negative consequences for vulnerable populations.
Outcome
- The study introduces the concept of "AI Responsibility Rifts" (AIRRs), defined as misalignments in stakeholders' subjective expectations, values, and perceptions of an AI system's impact.
- It identifies five key areas where these rifts occur: Safety, Humanity, Accountability, Reliability, and Equity.
- The paper proposes the SHARE framework, a self-diagnostic questionnaire designed to help organizations identify and address these rifts among their stakeholders.
- It provides core recommendations and caveats for executives to close the gaps in each of the five rift areas, promoting a more inclusive and effective approach to responsible AI.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we’re diving into a fascinating study titled “How Stakeholders Operationalize Responsible AI in Data-Sensitive Contexts.”
Host: In simple terms, it explores the huge challenges of getting AI right in complex situations, like humanitarian crises, where developers, aid agencies, and the people they serve can have very different ideas about what "responsible AI" even means. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, most of our listeners think about AI safety in terms of technical issues—like an AI making something up or having biased data. But this study suggests that’s only half the battle. What’s the bigger problem they identified?
Expert: Exactly. The study argues that focusing only on those technical, objective risks is dangerously insufficient, especially in high-stakes environments. The real, hidden problem is the subjective disagreements between different groups of people.
Expert: Think about an AI tool designed to predict food shortages. The developers in California see it as a technical challenge of data and accuracy. The aid agency executive sees a tool for efficient resource allocation. But the local aid worker on the ground might worry it dehumanizes their work, and the vulnerable population might fear how their data is being used.
Expert: These fundamental disagreements on purpose, values, and impact are what the study calls “AI Responsibility Rifts.” And these rifts can completely derail an AI project, leading to it being rejected or even causing unintended harm.
Host: So how did the researchers uncover these rifts? It sounds like something that would be hard to measure.
Expert: They went right into the heart of a real-world, data-sensitive context: the ongoing humanitarian crisis in Gaza. They didn't just run a survey; they conducted in-depth interviews across six different AI tools being deployed there. They spoke to everyone involved—from the AI developers and executives to the humanitarian analysts and end-users on the front lines.
Host: And that real-world pressure cooker revealed some major findings. What was the biggest takeaway?
Expert: The biggest takeaway is the concept of these AI Responsibility Rifts, or AIRRs. They found these rifts consistently appear in five key areas, which they've organized into a framework called SHARE.
Host: SHARE? Can you break that down for us?
Expert: Of course. SHARE stands for Safety, Humanity, Accountability, Reliability, and Equity. For each one, different stakeholders had wildly different views.
Expert: Take Safety. Developers focused on technical safeguards. But refugee stakeholders were asking, "Why do you need so much of our personal data? Is continuing to consent to its use truly safe for us?" That's a huge rift.
Host: And what about Humanity? That’s not a word you often hear in AI discussions.
Expert: Right. They found one AI tool was updated to automate a task that humanitarian analysts used to do. It worked "too well." It was efficient, but the analysts felt it devalued their expertise and eroded the crucial human-to-human relationships that are the bedrock of effective aid.
Host: So it's a conflict between efficiency and the human element. What about Accountability?
Expert: This was a big one. When an AI-assisted decision leads to a bad outcome, who is to blame? The developers? The manager who bought the tool? The person who used it? The study found there was no consensus, creating a "blame game" that erodes trust.
Host: That brings us to Reliability and Equity.
Expert: For Reliability, some field agents found an AI prediction tool was only reliable for very specific tasks, while executives saw its reports as impartial, objective truth. And for Equity, the biggest question was whether the AI was fixing old inequalities or creating new ones—for instance, by portraying certain nations in a negative light based on biased training data.
Host: Alex, this is crucial. Our listeners might not be in humanitarian aid, but they are deploying AI in their own complex businesses. What is the key lesson for them?
Expert: The lesson is that these rifts can happen anywhere. Whether you're rolling out an AI for hiring, for customer service, or for supply chain management, you have multiple stakeholders: your tech team, your HR department, your employees, and your customers. They will all have different values and expectations.
Host: So what can a business leader practically do to avoid these problems?
Expert: The study provides a powerful tool: the SHARE framework itself. It’s designed as a self-diagnostic questionnaire. A company can use it to proactively ask the right questions to all its stakeholders *before* a full-scale AI deployment.
Expert: By using the SHARE framework, you can surface these disagreements early. You can identify fears about job replacement, concerns about data privacy, or confusion over accountability. Addressing these human rifts head-on is the difference between an AI tool that gets adopted and creates value, and one that causes internal conflict and ultimately fails.
Host: So it’s about shifting from a purely technical risk mindset to a more holistic, human-centered one.
Expert: Precisely. It’s about building a shared understanding of what "responsible" means for your specific context. That’s how you make AI work not just in theory, but in practice.
Host: To sum up for our listeners: When implementing AI, look beyond the code. Search for the human rifts in expectations and values across five key areas: Safety, Humanity, Accountability, Reliability, and Equity. Using a framework like SHARE can help you bridge those gaps and ensure your AI initiatives succeed.
Host: Alex Ian Sutherland, thank you for making this complex study so accessible and actionable.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time.
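One way to picture the SHARE self-diagnostic described in this episode is as a short survey whose ratings are compared across stakeholder groups: wide disagreement in an area signals a potential responsibility rift. The Python sketch below is a hypothetical illustration; the questions, rating scale, and spread threshold are invented, not the study's actual instrument.

```python
from statistics import pstdev

# The five SHARE rift areas, each with one illustrative diagnostic question.
SHARE_QUESTIONS = {
    "Safety": "How confident are you that the data collected is safe to share?",
    "Humanity": "Does the tool support, rather than replace, human judgment?",
    "Accountability": "Is it clear who answers for an AI-assisted decision?",
    "Reliability": "Do you trust the tool's outputs for your day-to-day tasks?",
    "Equity": "Does the tool treat all affected groups fairly?",
}

def find_rifts(responses, threshold=1.5):
    """Flag areas where stakeholder ratings (1-5) diverge widely.

    `responses` maps each SHARE area to a list of ratings from different
    stakeholder groups (developers, managers, field workers, end users).
    A large population standard deviation signals a potential rift.
    Returns {area: spread} for every area at or above the threshold.
    """
    return {
        area: round(pstdev(ratings), 2)
        for area, ratings in responses.items()
        if len(ratings) > 1 and pstdev(ratings) >= threshold
    }
```

For example, if developers rate Safety a 5 while affected populations rate it a 1, the spread is large and the area is flagged for discussion before deployment, which is the "surface disagreements early" move the expert recommends.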
Responsible AI, AI ethics, stakeholder management, humanitarian AI, AI governance, data-sensitive contexts, SHARE framework
MIS Quarterly Executive (2025)
How to Operationalize Responsible Use of Artificial Intelligence
Lorenn P. Ruster, Katherine A. Daniell
This study outlines a practical five-phase process for organizations to translate responsible AI principles into concrete business practices. Based on participatory action research with two startups, the paper provides a roadmap for crafting specific responsibility pledges and embedding them into organizational processes, moving beyond abstract ethical statements.
Problem
Many organizations are committed to the responsible use of AI but struggle with how to implement it practically, creating a significant "principle-to-practice gap". This confusion can lead to inaction or superficial efforts known as "ethics-washing," where companies appear ethical without making substantive changes. The study addresses the lack of clear, actionable guidance for businesses, especially smaller ones, on where to begin.
Outcome
- Presents a five-phase process for operationalizing responsible AI: 1) Buy-in, 2) Intuition-building, 3) Pledge-crafting, 4) Pledge-communicating, and 5) Pledge-embedding.
- Argues that responsible AI should be approached as a systems problem, considering organizational mindsets, culture, and processes, not just technical fixes.
- Recommends that organizations create contextualized, action-oriented "pledges" rather than simply adopting generic AI principles.
- Finds that investing in responsible AI practices early, even in small projects, helps build organizational capability and transfers to future endeavors.
- Provides a framework for businesses to navigate communication challenges, balancing transparency with commercial interests to build user trust.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we’re diving into a study that offers a lifeline to any business navigating the complex world of ethical AI. It’s titled, "How to Operationalize Responsible Use of Artificial Intelligence."
Host: The study outlines a practical five-phase process for organizations to translate responsible AI principles into concrete business practices, moving beyond just abstract ethical statements. With me to unpack this is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, Alex, let’s start with the big picture. Why do businesses need a study like this? What’s the core problem it’s trying to solve?
Expert: The core problem is something researchers call the "principle-to-practice gap." Nearly every company today says they’re committed to the responsible use of AI. But when it comes to actually implementing it, they struggle. There’s a lot of confusion about where to even begin.
Host: And what happens when companies get stuck in that gap?
Expert: It leads to two negative outcomes. Either they do nothing, paralyzed by the complexity, or they engage in what's called "ethics-washing"—where they publish a list of high-level principles on their website but don't make any substantive changes to their products or processes. This study provides a clear roadmap to avoid those traps.
Host: A roadmap sounds incredibly useful. How did the researchers develop it? What was their approach?
Expert: Instead of just theorizing, they got their hands dirty. They used a method called participatory action research, where they worked directly with two early-stage startups over several years. By embedding with these small, resource-poor companies, they could identify a process that was practical, adaptable, and worked in a real-world business environment, not just in a lab.
Host: I like that it's grounded in reality. So, what did this process, this roadmap, actually look like? What were the key findings?
Expert: The study distills the journey into a clear five-phase process. It starts with Phase 1: Buy-in, followed by Intuition-building, Pledge-crafting, Pledge-communicating, and finally, Pledge-embedding.
Host: "Pledge-crafting" stands out. How is a pledge different from a principle?
Expert: That's one of the most powerful insights of the study. Principles are often generic, like "we believe in fairness." A pledge is a contextualized, action-oriented promise. For example, instead of just saying they value privacy, a company might pledge to minimize data collection, and then define exactly what that means for their specific product. It forces a company to translate a vague value into a concrete commitment.
Host: It makes the idea tangible. So, this brings us to the most important question for our listeners. Why does this matter for business? What are the key takeaways for a leader who wants to put responsible AI into practice today?
Expert: I’d boil it down to three key takeaways. First, approach responsible AI as a systems problem, not a technical problem. It’s not just about code; it's about your organizational mindset, your culture, and your processes.
Host: Okay, a holistic view. What’s the second takeaway?
Expert: The study emphasizes that the first step must be a mindset shift. Leaders and their teams have to move from seeing themselves as neutral actors to accepting their role as active shapers of technology and its impact on society. Without that genuine buy-in, any effort is at risk of becoming ethics-washing.
Host: And the third?
Expert: Build what the study calls "responsibility muscles." They found that by starting this five-phase process, even on small, early-stage projects, organizations build a capability for responsible innovation. That muscle memory then transfers to larger and more complex projects in the future. You don't have to solve everything at once; you just have to start.
Host: A fantastic summary. So, the message is: view it as a systems problem, cultivate the mindset of an active shaper, and start building those responsibility muscles by crafting specific pledges, not just principles.
Expert: Exactly. It provides a way to start moving, meaningfully and authentically.
Host: This has been incredibly insightful. Thank you, Alex Ian Sutherland, for making this complex topic so accessible. And thank you to our listeners for joining us on A.I.S. Insights — powered by Living Knowledge. We’ll see you next time.
Responsible AI, AI Ethics, Operationalization, Systems Thinking, AI Governance, Pledge-making, Startups
MIS Quarterly Executive (2025)
Transforming Energy Management with an AI-Enabled Digital Twin
Hadi Ghanbari, Petter Nissinen
This paper reports on a case study of how one of Europe's largest district heating providers, called EnergyCo, implemented an AI-assisted digital twin to improve energy efficiency and sustainability. The study details the implementation process and its outcomes, providing six key recommendations for executives in other industries who are considering adopting digital twin technology.
Problem
Large-scale energy providers face significant challenges in managing complex district heating networks due to fluctuating energy prices, the shift to decentralized renewable energy sources, and operational inefficiencies from siloed departments. Traditional control systems lack the comprehensive, real-time view needed to optimize the entire network, leading to energy loss, higher costs, and difficulties in achieving sustainability goals.
Outcome
- The AI-enabled digital twin provided a comprehensive, real-time representation of the entire district heating network, replacing fragmented views from legacy systems.
- It enabled advanced simulation and optimization, allowing the company to improve operational efficiency, manage fluctuating energy prices, and move toward its carbon neutrality goals.
- The system facilitated scenario-based decision-making, helping operators forecast demand, optimize temperatures and pressures, and reduce heat loss.
- The digital twin enhanced cross-departmental collaboration by providing a shared, holistic view of the network's operations.
- It enabled a shift from reactive to proactive maintenance by using predictive insights to identify potential equipment failures before they occur, reducing costs and downtime.
Host: Welcome to A.I.S. Insights, the podcast powered by Living Knowledge, where we translate complex research into actionable business strategy. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating case study called "Transforming Energy Management with an AI-Enabled Digital Twin." It details how one of Europe's largest energy providers used this cutting-edge technology to completely overhaul its operations for better efficiency and sustainability. With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: So, Alex, let's start with the big picture. Why would a massive energy company need a technology like an AI-enabled digital twin? What problem were they trying to solve?
Expert: Well, a company like EnergyCo, as it's called in the study, manages an incredibly complex district heating network. We're talking about over 2,800 kilometers of pipes. Their traditional control systems just couldn't keep up.
Host: What was making it so difficult?
Expert: It was a perfect storm of challenges. First, you have volatile energy prices. Second, they're shifting from a few big fossil-fuel plants to many smaller, decentralized renewable sources, which are less predictable. And internally, their departments were siloed. The production team, the network team, and the customer team all had different data and different priorities, leading to significant energy loss and higher costs.
Host: It sounds like they were flying with a dozen different dashboards but no single view of the cockpit. So what was the approach they took? What exactly is a digital twin?
Expert: In simple terms, a digital twin is a dynamic, virtual replica of a physical system. The key thing that distinguishes it from a simple digital model is that the data flow is automatic and two-way. It doesn't just receive real-time data from the physical network; it can be used to simulate changes and even send instructions back to optimize it.
Host: So it’s a living model, not a static blueprint. How did the study find this approach worked in practice for EnergyCo? What were the key outcomes?
Expert: The results were transformative. The first major finding was that the digital twin provided a single, comprehensive, real-time representation of the entire network. For the first time, everyone was looking at the same holistic picture.
Host: And what did that unified view enable them to do?
Expert: It unlocked advanced simulation and optimization. Operators could now run "what-if" scenarios. For example, they could accurately forecast demand based on weather data and then simulate the most cost-effective way to generate and distribute heat, drastically reducing energy loss and managing those fluctuating fuel prices.
Host: The study also mentions collaboration. How did it help there?
Expert: By breaking down the data silos, it naturally improved cross-departmental collaboration. When the production team could see how their decisions impacted network pressure miles away, they could make smarter, more coordinated choices. It created a shared operational language.
Host: That makes sense. And I was particularly interested in the shift from reactive to proactive maintenance.
Expert: Absolutely. Instead of waiting for a critical failure, the AI within the twin could analyze data to predict which components were under stress or likely to fail. This allowed EnergyCo to schedule maintenance proactively, which is far cheaper and less disruptive than emergency repairs.
Host: Alex, this is clearly a game-changer for the energy sector. But what’s the key takeaway for our listeners—the business leaders in manufacturing, logistics, or even retail? Why does this matter to them?
Expert: The most crucial lesson is about global versus local optimization. So many businesses try to improve one department at a time, but that can create bottlenecks elsewhere. A digital twin gives you a holistic view of your entire value chain, allowing you to make decisions that are best for the whole system, not just one part of it.
Host: So it’s a tool for breaking down those internal silos we see everywhere.
Expert: Exactly. The second key takeaway is that the human element is vital. The study shows that EnergyCo didn't just deploy the tech and replace people. They positioned it as a tool to support their operators, building trust and involving them in the process. Automation was gradual, which is critical for buy-in.
Host: That’s a powerful point about managing technological change. Any final takeaway for our audience?
Expert: Yes, the study highlights how this technology can become a foundation for new business models. EnergyCo is now exploring how to use the digital twin to give customers real-time data, turning them from passive consumers into active participants in energy management. For any business, this shows that operational tools can unlock future strategic growth.
Host: So, to summarize: an AI-enabled digital twin offers a holistic, real-time view of your operations, it breaks down silos to enable smarter decisions, and it can even pave the way for future innovation. It's about augmenting your people, not just automating processes.
Host: Alex Ian Sutherland, thank you so much for these brilliant insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights, powered by Living Knowledge. Join us next time as we uncover more actionable intelligence from the world of research.
Digital Twin, Energy Management, District Heating, AI, Cyber-Physical Systems, Sustainability, Case Study
MIS Quarterly Executive (2024)
How a Utility Company Established a Corporate Data Culture for Data-Driven Decision Making
Philipp Staudt, Rainer Hoffmann
This paper presents a case study of a large German utility company's successful transition to a data-driven organization. It outlines the strategy, which involved three core transformations: enabling the workforce, improving the data lifecycle, and implementing employee-centered data management. The study provides actionable recommendations for industrial organizations facing similar challenges.
Problem
Many industrial companies, particularly in the utility sector, struggle to extract value from their data. The ongoing energy transition, with the rise of renewable energy sources and electric vehicles, has made traditional, heuristic-based decision-making obsolete, creating an urgent need for a robust corporate data culture to manage increasing complexity and ensure grid stability.
Outcome
- A data culture was successfully established through three intertwined transformations: enabling the workforce, improving the data lifecycle, and transitioning to employee-centered data management.
- Enabling the workforce involved upskilling programs ('Data and AI Multipliers'), creating platforms for knowledge sharing, and clear communication to ensure widespread buy-in and engagement.
- The data lifecycle was improved by establishing new data infrastructure for real-time data, creating a central data lake, and implementing a strong data governance framework with new roles like 'data officers' and 'data stewards'.
- An employee-centric approach, featuring cross-functional teams, showcasing quick wins to demonstrate value, and transparent communication, was crucial for overcoming resistance and building trust.
- The transformation resulted in the deployment of over 50 data-driven solutions that replaced outdated processes and improved decision-making in real-time operations, maintenance, and long-term planning.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge, the podcast where we turn academic research into actionable business intelligence. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating case study titled, "How a Utility Company Established a Corporate Data Culture for Data-Driven Decision Making."
Host: It explores how a large German utility company transformed itself into a data-driven organization. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Most companies know data is important, but this study focuses on a utility company. What was the specific problem they were trying to solve?
Expert: It’s a problem many traditional industries are facing, but it's especially acute in the energy sector. They’re dealing with a massive shift—the rise of renewable energy like wind and solar, and the explosion in electric vehicle charging.
Host: So the old ways of working just weren't cutting it anymore?
Expert: Exactly. For decades, they relied on experience and simple tools. The study gives a great example of a "drag pointer"—basically a needle on a gauge that only showed the highest energy load a substation ever experienced. It didn't tell you when it happened, or why.
Host: A single data point, with no context.
Expert: Precisely. And that was fine when the grid was predictable. But suddenly, they went from handling a dozen requests for new EV chargers a month to nearly three thousand. The old "rule-of-thumb" approach became obsolete and even risky for grid stability. They were flying blind.
Host: So how did the researchers get inside this transformation to understand how the company fixed this?
Expert: They conducted a deep-dive case study, interviewing seven of the company’s key domain experts. These were the people on the front lines—the ones directly involved in building the new data strategy. This gave them a real ground-truth perspective on what actually worked.
Host: So what were the key findings? What was the secret to their success?
Expert: The study breaks it down into three core transformations that were all linked together. The first, and perhaps most important, was enabling the workforce.
Host: This wasn't just about hiring a team of data scientists, then?
Expert: Not at all. They created a program to train existing employees to become "Data and AI Multipliers." These were people from various departments who became data champions, identifying opportunities and helping their colleagues use new tools. It was about upskilling from within.
Host: Building capability across the organization. What was the second transformation?
Expert: Improving the data lifecycle. This sounds technical, but it’s really about fixing the plumbing. They moved from scattered, siloed databases to a central data lake, creating a single source of truth that everyone could access.
Host: And I see they also created new roles like 'data officers' and 'data stewards'.
Expert: Yes, and this is crucial. It made data quality a formal part of people's jobs. Instead of data being an abstract IT issue, specific people became accountable for its accuracy and maintenance within their business units.
Host: That makes sense. But change is hard. How did they get everyone to embrace this new way of working?
Expert: That brings us to the third piece: an employee-centered approach. They knew they couldn't just mandate this from the top down. They formed cross-functional teams, bringing engineers and data specialists together to solve real problems.
Host: And they made a point of showcasing quick wins, right?
Expert: Absolutely. This was key to building momentum. For example, they automated a critical report that used to take two employees a full month to compile, three times a year. Suddenly, that data was available in real-time. When people see that kind of tangible benefit, it overcomes resistance and builds trust in the process.
Host: This is all fascinating for a utility company, but what's the key takeaway for a business leader in, say, manufacturing or retail? Why does this matter to them?
Expert: The lessons are completely universal. First, you can't just buy technology; you have to invest in your people. The "Data Multiplier" model of empowering internal champions can work in any industry.
Host: So, people first. What else?
Expert: Second, make data quality an explicit responsibility. Creating roles like data stewards ensures accountability and treats data as the critical business asset it is. It stops being everyone's problem and no one's priority.
Host: And the third lesson?
Expert: Start small and demonstrate value fast. Don't try to boil the ocean. Find a painful, manual process, fix it with a data-driven solution, and then celebrate that "quick win." That success story becomes your best marketing tool for driving wider adoption. Ultimately, this company deployed over 50 new data solutions that transformed their operations.
Host: A powerful example of real-world impact. So, to recap: the challenges of the energy transition forced this company to ditch its old methods. Their success came from a three-part strategy: empowering their workforce, rebuilding their data infrastructure, and using an employee-centric approach focused on quick wins.
Host: Alex, thank you so much for breaking that down for us. It’s a brilliant roadmap for any company looking to build a true data culture.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for joining us on A.I.S. Insights — powered by Living Knowledge. We’ll see you next time.
data culture, data-driven decision making, utility company, energy transition, change management, data governance, case study
MIS Quarterly Executive (2024)
The Hidden Causes of Digital Investment Failures
Joe Peppard, R. M. Bastien
This study analyzes hundreds of digital projects to uncover the subtle, hidden root causes behind their frequent failure or underachievement. It moves beyond commonly cited symptoms, like budget overruns, to identify five fundamental organizational and structural issues that prevent companies from realizing value from their technology investments. The analysis is supported by an illustrative case study of a major insurance company's large-scale transformation program.
Problem
Organizations invest heavily in digital technology expecting significant returns, but most struggle to achieve their goals, and project success rates have not improved over time. Despite an abundance of project management frameworks and best practices, companies often address the symptoms of failure rather than the underlying problems. This research addresses the gap by identifying the deep-rooted, often surprising causes for these persistent investment failures.
Outcome
- The Illusion of Control: Business leaders believe they are controlling projects through metrics and governance, but this is an illusion that masks a lack of real influence over value creation.
- The Fallacy of the “Working System”: The primary goal becomes delivering a functional IT system on time and on budget, rather than achieving the intended business performance improvements.
- Conflicts of Interest: The conventional model of a single, centralized IT department creates inherent conflicts of interest, as the same group is responsible for designing, building, and quality-assuring systems.
- The IT Amnesia Syndrome: A project-by-project focus leads to a collective organizational memory loss about why and how systems were built, creating massive complexity and technical debt for future projects.
- Managing Expenses, Not Assets: Digital systems are treated as short-term expenses to be managed rather than long-term productive assets whose value must be cultivated over their entire lifecycle.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I'm your host, Anna Ivy Summers.
Host: Today, we’re tackling a multi-billion-dollar question: why do so many major digital and technology projects fail to deliver on their promise?
Host: We’re diving into a fascinating new study called "The Hidden Causes of Digital Investment Failures". It analyzes hundreds of projects to uncover the subtle, often invisible root causes behind these failures, moving beyond the usual excuses like budget overruns or missed deadlines.
Host: To help us unpack this is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big problem. Companies are pouring huge amounts of money into digital transformation, but the success rates just aren't improving. What's going on?
Expert: It’s a huge issue. The study uses a great analogy: it’s like treating sciatica. You feel the pain in your leg, so you stretch the muscle. That gives temporary relief, but the root cause is a problem in your lower back. In business, we see symptoms like budget overruns and we react by adding more governance or new project management tools. We’re treating the leg, not the back.
Expert: The study highlights a case of a major insurance company. They spent over $120 million and six years on a new platform, only to find they were less than a third of the way done, with the final cost estimate having nearly doubled. They were doing all the "right" project management things, but it was still failing.
Host: So they were addressing the symptoms, not the true cause. How did the researchers in this study get to those root causes? What was their approach?
Expert: They conducted a deep root-cause analysis. Think of it as business archaeology. They didn't just look at the surface of failed projects; they analyzed hundreds of them to map the complex cause-and-effect relationships that led to poor outcomes. They then workshopped these findings with senior practitioners to ensure they reflected real-world experience.
Host: And this "archaeology" uncovered five key hidden causes. The first one is called 'The Illusion of Control'. It sounds a bit ominous.
Expert: It is, in a way. Business leaders believe they're in control because they have dashboards, metrics, and steering committees tracking time and cost. But the study found this is an illusion. They are controlling the execution of the project, but they have no real influence over the creation of business value.
Expert: In that insurance case, the executives saw progress reports, but over 95% of the budget was being spent by technical teams making hundreds of small, invisible decisions every week that ultimately determined the project's fate. The business leaders were too far removed to have any real control over the outcome.
Host: Which sounds like it leads directly to the second finding: 'The Fallacy of the Working System'. What does that mean?
Expert: It means the goalpost shifts. The original objective was to improve business performance, but the project's primary goal becomes just delivering a functional IT system on time and on budget. Everyone from the project manager to the CIO is incentivized to just get a "working system" out the door.
Host: So, the 'working system' becomes the end goal, not the business value it was supposed to create.
Expert: Exactly. And there's often no one held accountable for delivering that value after the project team declares victory and disbands.
Host: The third cause is 'Conflicts of Interest'. This sounds like a structural problem.
Expert: It's a huge one. The study points out that in mature industries like construction, you have separate roles: the customer funds it, the architect designs it, and the builder constructs it. They have separate accountabilities. But in the typical corporate structure, a single IT department does all three. They design, build, and quality-check their own work.
Host: So when a trade-off has to be made between long-term quality and the short-term deadline...
Expert: The deadline and budget almost always win. It creates a system that prioritizes short-term delivery over building resilient, high-quality digital assets.
Host: And I imagine that short-term focus creates long-term problems, which might be what the fourth cause, 'The IT Amnesia Syndrome', is about.
Expert: Precisely. Because the focus is on finishing the current project, things like proper documentation are the first to be cut. As teams move on and people leave, the organization forgets why systems were built a certain way. The study found this creates massive, unnecessary complexity. Future projects are then bogged down by trying to understand these poorly documented legacy systems.
Host: It sounds like building on a shaky foundation you can't even see properly.
Expert: A perfect description.
Host: And the final hidden cause: 'Managing Expenses, Not Assets'.
Expert: Right. A company would never treat a new factory or a fleet of cargo ships as a simple expense. They are managed as productive assets over their entire lifecycle. But digital systems, which can cost hundreds of millions, are often treated as short-term project expenses. There's no focus on their long-term value, maintenance costs, or when they should be retired.
Host: So Alex, this is a pretty powerful diagnosis of what’s going wrong. The crucial question for our listeners is: what's the cure? What do leaders need to do differently?
Expert: The study offers some clear, if challenging, recommendations. First, business leaders must truly *own* their digital systems as productive assets. The business unit that gets the value should be the owner, not the IT department.
Expert: Second, organizations need to eliminate those conflicts of interest by separating the roles of architecting, building, and quality assurance. You need independent checks and balances.
Expert: And finally, the mindset has to shift from securing funding to delivering value. One CEO the study mentions now calls project sponsors back before the investment committee years after a project is finished to prove the business benefits were actually achieved. That creates real accountability.
Host: So it’s not about finding a better project methodology, but about fundamentally changing organizational structure and, most importantly, the mindset of leadership.
Expert: That's the core message. The success or failure of a digital investment is determined long before the project itself ever kicks off. It's determined by the organizational system it operates in.
Host: A fascinating and crucial insight. We’ve been discussing the study "The Hidden Causes of Digital Investment Failures". The five hidden causes are: The Illusion of Control, The Fallacy of the Working System, Conflicts of Interest, IT Amnesia Syndrome, and Managing Expenses, Not Assets.
Host: Alex Ian Sutherland, thank you for making this so clear for us.
Expert: My pleasure, Anna.
Host: And thank you for listening to A.I.S. Insights — powered by Living Knowledge. Join us next time as we decode the research that’s reshaping the world of business.
digital investment, project failure, IT governance, root cause analysis, business value, single-counter IT model, technical debt
MIS Quarterly Executive (2024)
Translating AI Ethics Principles into Practice to Support Robotic Process Automation Implementation
Dörte Schulte-Derne, Ulrich Gnewuch
This study investigates how abstract AI ethics principles can be translated into concrete actions during technology implementation. Through a longitudinal case study at a German energy service provider, the authors observed the large-scale rollout of Robotic Process Automation (RPA) over 30 months. The research provides actionable recommendations for leaders to navigate the ethical challenges and employee concerns that arise from AI-driven automation.
Problem
Organizations implementing AI to automate processes often face uncertainty, fear, and resistance from employees. While high-level AI ethics principles exist to provide guidance, business leaders struggle to apply these abstract concepts in practice. This creates a significant gap between knowing *what* ethical goals to aim for and knowing *how* to achieve them during a real-world technology deployment.
Outcome
- Define clear roles for implementing and supervising AI systems, and ensure senior leaders accept overall responsibility for any negative consequences.
- Strive for a fair distribution of AI's benefits and costs among all employees, addressing tensions in a diverse workforce.
- Increase transparency by making the AI's work visible (e.g., allowing employees to observe a bot at a dedicated workstation) to turn fear into curiosity.
- Enable open communication among trusted peers, creating a 'safe space' for employees to discuss concerns without feeling judged.
- Help employees cope with fears by involving them in the implementation process and avoiding the overwhelming removal of all routine tasks at once.
- Involve employee representation bodies and data protection officers from the beginning of a new AI initiative to proactively address privacy and labor concerns.
Host: Welcome to A.I.S. Insights, the podcast where we connect big ideas with business practice. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating study from the MIS Quarterly Executive titled, "Translating AI Ethics Principles into Practice to Support Robotic Process Automation Implementation".
Host: It explores how abstract ethical ideas about AI can be turned into concrete actions when a company rolls out new technology. It follows a German energy provider over 30 months as they implemented large-scale automation, providing a real-world roadmap for leaders.
Host: With me is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Many business leaders listening have heard about AI ethics, but the study suggests there's a major disconnect. What's the core problem they identified?
Expert: The problem is a classic gap between knowing *what* to do and knowing *how* to do it. Companies have access to high-level principles like fairness, transparency, and responsibility. But when it's time to automate a department's workflow, managers are often left wondering, "What does 'fairness' actually look like on a Tuesday morning for my team?"
Expert: This uncertainty creates fear and resistance among employees. They worry about their jobs, their routines get disrupted, and they often see AI as a threat. The study looked at a company, called ESP, that was facing this exact dilemma.
Host: So how did the researchers get inside this problem to understand it?
Expert: They used a longitudinal case study approach. For two and a half years, they were deeply embedded in the company. They conducted interviews, surveys, and on-site observations with everyone involved—from the back-office employees whose tasks were being automated, to the project managers, and even senior leaders and the employee works council.
Host: That deep-dive approach must have surfaced some powerful findings. What were the key takeaways?
Expert: Absolutely. The first was about responsibility. It can't be an abstract concept. At ESP, when the IT helpdesk was asked to create a user account for a bot, they initially refused, asking who would be personally responsible if it made a mistake.
Host: That's a very practical roadblock. How did the company solve it?
Expert: They had to define clear roles, creating a "bot supervisor" who was accountable for the bot's daily operations. But more importantly, they established that senior leadership, not just the tech team, had to accept ultimate responsibility for any negative outcomes.
Host: That makes sense. The study also mentions transparency. How do you make something like a software bot, which is essentially invisible, transparent to a nervous workforce?
Expert: This is one of my favorite findings. ESP set up a dedicated workstation in the middle of the office where anyone could walk by and watch the bot perform its tasks on screen. To prevent people from accidentally turning it off, they put a giant teddy bear in the chair, which they named "Robbie".
Host: A teddy bear?
Expert: Exactly. It was a simple, humanizing touch. It made the technology feel less like a mysterious, threatening force and more like a tool. It literally turned employee fear into curiosity.
Host: So it's about demystifying the technology. What about helping employees cope with the changes to their actual jobs?
Expert: The key was gradual involvement and open communication. Instead of top-down corporate announcements, they found that peer-to-peer conversations were far more effective. They created safe spaces where employees could talk to trusted colleagues who had already worked with the bots, ask honest questions, and voice their concerns without being judged.
Host: It sounds like the human element was central to this technology rollout. Alex, let’s get to the bottom line. For the business leaders listening, why does all of this matter? What are the key takeaways for them?
Expert: I think there are three critical takeaways. First, AI ethics is not a theoretical exercise; it's a core part of project risk management. Ignoring employee concerns doesn't make them go away—it just leads to resistance and potential project failure.
Expert: Second, make the invisible visible. Whether it's a teddy bear on a chair or a live dashboard, find creative ways to show employees what the AI is actually doing. A little transparency goes a long way in building trust.
Expert: And finally, involve your stakeholders from day one. That means bringing your employee representatives, your data protection officers, and your legal teams into the conversation early. In the study, the data protection officer stopped a "task mining" initiative due to privacy concerns, saving the company time and resources on a project that was a non-starter.
Host: So, it's about being proactive with responsibility, transparency, and communication.
Expert: Precisely. It’s about treating the implementation not just as a technical challenge, but as a human one.
Host: A fantastic summary of a very practical study. The message is clear: to succeed with AI automation, you have to translate ethical principles into thoughtful, tangible actions that build trust with your people.
Host: Alex Ian Sutherland, thank you for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thanks to all of you for listening to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more valuable lessons from the intersection of business and technology.
AI ethics, Robotic Process Automation (RPA), change management, technology implementation, case study, employee resistance, ethical guidelines
MIS Quarterly Executive (2024)
Combining Low-Code/No-Code with Noncompliant Workarounds to Overcome a Corporate System's Limitations
Robert M. Davison, Louie H. M. Wong, Steven Alter
This study explores how employees at a warehouse in Hong Kong utilize low-code/no-code principles with everyday tools like Microsoft Excel to create unofficial solutions. It examines these noncompliant but essential workarounds that compensate for the shortcomings of their mandated corporate software system. The research is based on a qualitative case study involving interviews with warehouse staff.
Problem
A global company implemented a standardized, non-customizable corporate system (Microsoft Dynamics) that was ill-suited for the unique logistical needs of its Hong Kong operations. This created significant operational gaps, particularly in delivery scheduling, leaving employees unable to perform critical tasks using the official software.
Outcome
- Employees effectively use Microsoft Excel as a low-code tool to create essential, noncompliant workarounds that are vital for daily operations, such as delivery management.
- These employee-driven solutions, developed without formal low-code platforms or IT approval, become institutionalized and crucial for business success, highlighting the value of 'shadow IT'.
- The study argues that low-code/no-code development is not limited to formal platforms and that managers should recognize, support, and govern these informal solutions.
- Businesses are advised to adopt a portfolio approach to low-code development, leveraging tools like Excel alongside formal platforms, to empower employees and solve real-world operational problems.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating study titled "Combining Low-Code/No-Code with Noncompliant Workarounds to Overcome a Corporate System's Limitations."
Host: It explores how employees at a warehouse in Hong Kong used everyday tools, like Microsoft Excel, to create unofficial but essential solutions when their official corporate software fell short.
Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome back.
Expert: Great to be here, Anna.
Host: So, Alex, let's start with the big picture. What was the real-world problem this study looked into?
Expert: It’s a classic story of a global headquarters rolling out a one-size-fits-all solution. The company, called CoreRidge in the study, implemented a standardized corporate system, Microsoft Dynamics.
Expert: The problem was, this system was completely non-customizable. It worked fine in most places, but it was a disaster for their Hong Kong operations.
Host: A disaster how? What was so unique about Hong Kong?
Expert: In Hong Kong, due to the high cost of real estate, the company has small retail stores and one large, central warehouse. The corporate system was designed for locations where the warehouse and store are together.
Expert: It simply couldn't handle the complex delivery scheduling needed to get products from that single warehouse to all the different stores and customers. Core tasks were impossible to perform with the official system.
Host: So employees were stuck. How did the researchers figure out what was happening?
Expert: They went right to the source. It was a qualitative case study where they conducted in-depth interviews with 31 employees at the warehouse, from trainees all the way up to senior management. This gave them a ground-level view of how the team was actually getting work done.
Host: And that brings us to the findings. What did they discover?
Expert: They found that employees had essentially turned Microsoft Excel into their own low-code development tool. They were downloading data from the official system and using Excel to manage everything from delivery lists to rescheduling shipments during a typhoon.
Host: So they built their own system, in a way.
Expert: Exactly. And this wasn't a secret, rogue operation. These Excel workarounds became standard operating procedure. They were noncompliant with corporate IT policy, but they were absolutely vital for daily operations and customer satisfaction. The study calls this 'shadow IT', but frames it as a valuable, employee-driven innovation.
Host: That’s a really interesting perspective. It sounds like the company should be celebrating these employees, not punishing them.
Expert: That’s the core argument. The study suggests that this kind of informal, tool-based problem-solving is a legitimate form of low-code development. It’s not always about using a fancy, dedicated platform. Sometimes the best tool is the one your team already knows how to use.
Host: This is the crucial part for our listeners. What are the key business takeaways here? Why does this matter?
Expert: It matters immensely. First, it shows that managers need to recognize and support these informal solutions, not just shut them down. These workarounds are a goldmine of information about what's not working in your official systems.
Host: So, don't fight 'shadow IT', but try to understand it?
Expert: Precisely. The second major takeaway is that businesses should adopt a "portfolio approach" to low-code development. Don't just invest in one big platform. Empower your employees by recognizing the value of flexible, everyday tools like Excel.
Expert: It’s about creating a governance structure that can embrace these informal solutions, manage their risks, and learn from them to make the whole organization smarter and more agile.
Host: It sounds like a shift from rigid, top-down control to a more flexible, collaborative approach to technology.
Expert: That's it exactly. It's about trusting your employees on the front lines to solve the problems they face every day, with the tools they have at hand.
Host: So, to summarize: a rigid corporate system can fail to meet local needs, but resourceful employees can bridge the gap using everyday tools like Excel. And the big lesson for businesses is to recognize, govern, and learn from these informal innovations rather than just trying to eliminate them.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And a big thank you to our audience for tuning in to A.I.S. Insights. Join us next time as we continue to explore the ideas shaping our world, powered by Living Knowledge.
Low-Code/No-Code, Workarounds, Shadow IT, Citizen Development, Enterprise Systems, Case Study, Microsoft Excel
AI & SOCIETY (2025)
What it takes to control AI by design: human learning
Dov Te'eni, Inbal Yahav, David Schwartz
This study proposes a robust framework, based on systems theory, for maintaining meaningful human control over complex human-AI systems. The framework emphasizes the importance of continual human learning to parallel advancements in machine learning, operating through two distinct modes: a stable mode for efficient operation and an adaptive mode for learning. The authors demonstrate this concept with a method called reciprocal human-machine learning applied to a critical text classification system.
Problem
Traditional methods for control and oversight are insufficient for the complexity of modern AI technologies, creating a gap in ensuring that critical AI systems remain aligned with human values and goals. As AI becomes more autonomous and operates in volatile environments, there is an urgent need for a new approach to design systems that allow humans to effectively stay in control and adapt to changing circumstances.
Outcome
- The study introduces a framework for human control over AI that operates at multiple levels and in two modes: stable and adaptive.
- Effective control requires continual human learning to match the pace of machine learning, ensuring humans can stay 'in the loop' and 'in control'.
- A method called 'reciprocal human-machine learning' is presented, where humans and AI learn from each other's feedback in an adaptive mode.
- This approach results in high-performance AI systems that are unbiased and aligned with human values.
- The framework provides a model for designing control in critical AI systems that operate in dynamic environments.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I'm your host, Anna Ivy Summers.
Host: Today, we’re diving into a critical question for any organization using artificial intelligence: How do we actually stay in control? We'll be discussing a fascinating study titled, "What it takes to control AI by design: human learning."
Host: It proposes a new framework for maintaining meaningful human control over complex AI systems, emphasizing that for AI to learn, humans must learn right alongside it. Here to break it all down for us is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna. It’s a crucial topic.
Host: Absolutely. So, Alex, let's start with the big picture. What is the real-world problem this study is trying to solve?
Expert: The problem is that AI is evolving much faster than our methods for managing it. Think about critical systems in finance, cybersecurity, or logistics. We use AI to make high-stakes decisions at incredible speed.
Expert: But our traditional methods of oversight, where a person just checks the final output, are no longer enough. As the study points out, AI can alter its behavior or generate unexpected results when it encounters new situations, creating a huge risk that it no longer aligns with our original goals.
Host: So there's a growing gap between the AI's capability and our ability to control it. How did the researchers approach this challenge?
Expert: They took a step back and used systems theory. Instead of seeing the human and the AI as separate, they designed a single, integrated system that operates in two distinct modes.
Expert: First, there's the 'stable mode'. This is when the AI is working efficiently on its own, handling routine tasks based on what it already knows. Think of it as the AI on a well-defined autopilot.
Expert: But when the environment changes or the AI's confidence drops, the system shifts into an 'adaptive mode'. This is a collaborative learning session, where the human expert and the AI work together to make sense of the new situation.
Host: That’s a really clear way to put it. What were the main findings that came out of this two-mode approach?
Expert: The first key finding is that this dual-mode structure is essential. You get the efficiency of automation in the stable mode, but you have a built-in, structured way to adapt and learn when faced with uncertainty.
Host: And I imagine the human is central to that adaptive mode.
Expert: Exactly. And that’s the second major finding: for this to work, human learning must keep pace with machine learning. To stay in control, the human expert can't be a passive observer. They must be actively learning and updating their own understanding of the environment.
Host: That turns the typical human-in-the-loop idea on its head a bit.
Expert: It does. Which leads to the third and most interesting finding, a method they call 'reciprocal human-machine learning'. In the adaptive mode, it’s not just the human teaching the machine. The AI provides specific feedback to the human expert, pointing out patterns or inconsistencies they might have missed.
Expert: So, the human and the AI are actively learning from each other. This reciprocal feedback loop ensures the entire system gets smarter, performs better, and stays aligned with human values, preventing things like algorithmic bias from creeping in.
Host: A true partnership. This is where it gets really interesting for our listeners. Alex, why does this matter for business? What are the practical takeaways?
Expert: This framework is a roadmap for de-risking advanced AI applications. For any business using AI in critical functions, this is a way to ensure safety, accountability, and alignment with company ethics. It's about moving from a "black box" to a controllable, transparent system.
Expert: Second, it's about building institutional knowledge. By keeping humans actively engaged in the learning process, you're not just improving the AI; you're upskilling your employees. They develop a deeper expertise that makes your entire operation more resilient and adaptable.
Expert: And finally, that adaptability is a huge competitive advantage. A business with a human-AI system that can learn and respond to market shifts, new cyber threats, or supply chain disruptions will outperform one with a rigid, static AI every time.
Host: So to recap: traditional AI oversight is failing. This study presents a powerful framework where a human-AI system operates in a stable mode for efficiency and an adaptive mode for learning.
Host: The key is that this learning must be reciprocal, a two-way street where both human and machine get smarter together, ensuring the AI remains a powerful, controllable, and trusted tool for the business.
Host: Alex, thank you so much for these valuable insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights. Join us next time as we continue to explore the ideas shaping our world.