This study analyzes IBM's strategic dilemma with its Watson Health initiative, which aimed to monetize artificial intelligence for cancer detection and treatment recommendations. It explores whether IBM should continue its specialized focus on healthcare (a vertical strategy) or reposition Watson as a versatile, cross-industry AI platform (a horizontal strategy). The paper provides insights into the opportunities and challenges associated with unlocking the transformational power of AI in a business context.
Problem
Despite a multi-billion dollar investment and initial promise, IBM's Watson Health struggled with profitability, model accuracy, and scalability. The AI's recommendations were not consistently reliable or generalizable across different patient populations and healthcare systems, leading to poor adoption. This created a critical strategic crossroads for IBM: whether to continue investing heavily in the specialized healthcare vertical or to pivot towards a more scalable, general-purpose AI platform to drive future growth.
Outcome
- Model Accuracy & Bias: Watson's performance was inconsistent, and its recommendations, trained primarily on US data, were not always applicable to international patient populations, revealing significant algorithmic bias.
- Lack of Explainability: The 'black box' nature of the AI made it difficult for clinicians to trust its recommendations, hindering adoption as they could not understand its reasoning process.
- Integration and Scaling Challenges: Integrating Watson into existing hospital workflows and electronic health records was costly and complex, creating significant barriers to widespread implementation.
- Strategic Dilemma: The challenges forced IBM to choose between continuing its high-investment vertical strategy in healthcare, pivoting to a more scalable horizontal cross-industry platform, or attempting a convergence of both approaches.
Host: Welcome to A.I.S. Insights, the podcast powered by Living Knowledge, where we translate complex research into actionable business strategy. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating study titled "IBM Watson Health Growth Strategy: Is Artificial Intelligence (AI) The Answer". It analyzes one of the most high-profile corporate AI ventures in recent memory.
Host: This analysis explores the strategic dilemma IBM faced with Watson Health, its ambitious initiative to use AI for cancer detection and treatment. The core question: should IBM double down on this specialized healthcare focus, or pivot to a more versatile, cross-industry AI platform?
Host: With me to unpack this is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Glad to be here, Anna.
Host: So, Alex, IBM's Watson became famous for winning on the game show Jeopardy. The move into healthcare seemed like a noble and brilliant next step. What was the big problem they were trying to solve?
Expert: It was a massive problem. The amount of medical research and data is exploding. It's impossible for any single doctor to keep up with it all. IBM's vision was for Watson to ingest millions of research articles, clinical trial results, and patient records to help oncologists make better, more personalized treatment recommendations.
Host: A truly revolutionary idea. But the study suggests that despite billions of dollars in investment, the reality was quite different.
Expert: That's right. Watson Health struggled significantly with profitability and adoption. The AI's recommendations weren't as reliable or as useful as promised, which created a critical crossroads for IBM. They had to decide whether to keep pouring money into this very specific healthcare vertical or to change their entire strategy.
Host: How did the researchers in this study approach such a complex business case?
Expert: The study is a deep strategic analysis. It examines IBM's business model, its technology, and the market environment. The authors reviewed everything from internal strategy components and partnerships with major cancer centers to the specific technological hurdles Watson faced. It's essentially a case study on the immense challenges of monetizing a "moonshot" AI project.
Host: Let's get into those challenges. What were some of the key findings?
Expert: A major one was model accuracy and bias. The study highlights that Watson was primarily trained using patient data from one institution, Memorial Sloan Kettering Cancer Center in the US. This meant its recommendations didn't always translate well to different patient populations, especially internationally.
Host: So, an AI trained in New York might not be effective for a patient in Tokyo or Mumbai?
Expert: Precisely. This revealed a significant algorithmic bias. For example, one finding mentioned in the analysis showed a mismatch rate of over 27% between Watson's suggestions and the actual treatments given to cervical cancer patients in China. That's a critical failure when you're dealing with patient health.
Host: That naturally leads to the issue of trust. How did doctors react to this new tool?
Expert: That was the second major hurdle: a lack of explainability. Doctors called it the 'black box' problem. Watson would provide a ranked list of treatments, but it couldn't clearly articulate the reasoning behind its top choice. Clinicians need to understand the 'why' to trust a recommendation, and without that transparency, adoption stalled.
Host: And beyond trust, were there practical, on-the-ground problems?
Expert: Absolutely. The study points to massive integration and scaling challenges. Integrating Watson into a hospital's existing complex workflows and electronic health records was incredibly difficult and expensive. The partnership with MD Anderson Cancer Center, for instance, struggled because Watson couldn't properly interpret doctors' unstructured notes. It wasn't a simple plug-and-play solution.
Host: This is a powerful story. For our listeners—business leaders, strategists, tech professionals—what's the big takeaway? Why does the Watson Health story matter for them?
Expert: There are a few key lessons. First, it's a cautionary tale about managing hype. IBM positioned Watson as a revolution, but the technology wasn't there yet. This created a gap between promise and reality that damaged its credibility.
Host: So, under-promise and over-deliver, even with exciting new tech. What else?
Expert: The second lesson is that technology, no matter how powerful, is not a substitute for deep domain expertise. The nuances of medicine—patient preferences, local treatment availability, the context of a doctor's notes—were things Watson struggled with. You can't just apply an algorithm to a complex field and expect it to work without genuine, human-level understanding.
Host: And what about that core strategic dilemma the study focuses on—this idea of a vertical versus a horizontal strategy?
Expert: This is the most critical takeaway for any business investing in AI. IBM chose a vertical strategy—a deep, specialized solution for one industry. The study shows how incredibly high-risk and expensive that can be. The alternative is a horizontal strategy: building a general, flexible AI platform that other companies can adapt for their own needs. It's a less risky, more scalable approach, and it’s the path that competitors like Google and Amazon have largely taken.
Host: So, to wrap it up: IBM's Watson Health was a bold and ambitious vision to transform cancer care with AI.
Host: But this analysis shows its struggles were rooted in very real-world problems: data bias, the 'black box' issue of trust, and immense practical challenges with integration.
Host: For business leaders, the story is a masterclass in the risks of a highly specialized vertical AI strategy and a reminder that the most advanced technology is only as good as its understanding of the people and processes it's meant to serve.
Host: Alex, thank you so much for breaking down this complex topic for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. We'll see you next time.
Artificial Intelligence (AI), AI Strategy, Watson, Healthcare AI, Vertical AI, Horizontal AI, AI Ethics
Communications of the Association for Information Systems (2025)
Reinventing French Agriculture: The Era of Farmers 4.0, Technological Innovation and Sustainability
Claude Chammaa, Fatma Fourati-Jamoussi, Lucian Ceapraz, Valérie Leroux
This study investigates the behavioral, contextual, and economic factors that influence French farmers' adoption of innovative agricultural technologies. Using a mixed-methods approach that combines qualitative interviews and quantitative surveys, the research proposes and validates the French Farming Innovation Adoption (FFIA) model, an agricultural adaptation of the UTAUT2 model, to explain technology usage.
Problem
The agricultural sector is rapidly transforming with digital innovation, but the factors driving technology adoption among farmers, particularly in cost-sensitive and highly regulated environments like France, are not fully understood. Existing technology acceptance models often fail to capture the central role of economic viability, leaving a gap in explaining how sustainability goals and policy supports translate into practical adoption.
Outcome
- The most significant direct predictor of technology adoption is 'Price Value'; farmers prioritize innovations they perceive as economically beneficial and cost-effective.
- Traditional drivers like government subsidies (Facilitating Conditions), expected performance, and social influence do not directly impact technology use. Instead, their influence is indirect, mediated through the farmer's perception of the technology's price value.
- Perceived sustainability benefits alone do not significantly drive adoption. For farmers to invest, environmental advantages must be clearly linked to economic gains, such as reduced costs or increased yields.
- Economic appraisal is the critical filter through which farmers evaluate new technologies, making it the central consideration in their decision-making process.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge, where we translate complex research into actionable business strategy. Today, we're digging into the world of smart farming.
Host: We're looking at a fascinating study called "Reinventing French Agriculture: The Era of Farmers 4.0, Technological Innovation and Sustainability." It investigates what really makes farmers adopt new technologies. Here to break it down for us is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: So, Alex, we hear a lot about Agriculture 4.0—drones, sensors, A.I. on the farm. But this study suggests it's not as simple as just building new tech. What's the real-world problem they're tackling?
Expert: Exactly. The big problem is that while technology offers huge potential, the factors driving adoption aren't well understood, especially in a place like France. French farmers are under immense pressure from complex regulations like the EU's Common Agricultural Policy and global trade deals.
Expert: They face a constant balancing act between sustainability goals, high production costs, and international competition. Previous models for technology adoption often missed the most critical piece of the puzzle for farmers: economic viability.
Host: So how did the researchers get to the heart of what farmers are actually thinking? What was their approach?
Expert: They used a really smart mixed-methods approach. First, they went out and conducted in-depth interviews with a dozen farmers to understand their real-world challenges and resistance to new tech. These conversations revealed frustrations with cost, complexity, and even digital anxiety.
Expert: Then, using those real-world insights, they designed a quantitative survey for 171 farmers who were already using innovative technologies. This allowed them to build and test a model that reflects the actual decision-making process on the ground.
Host: That sounds incredibly thorough. So, after talking to farmers and analyzing the data, what were the key findings? What really drives a farmer to invest in a new piece of technology?
Expert: The results were crystal clear on one thing: Price Value is king. The single most significant factor predicting whether a farmer will use a new technology is their perception of its economic benefit. Will it save or make them money? That's the first and most important question.
Host: That makes intuitive sense. But what about other factors, like government subsidies designed to encourage this, or seeing your neighbor use a new tool?
Expert: This is where it gets really interesting. Factors like government support, the technology’s expected performance, and even social influence from other farmers do not directly lead to adoption.
Host: Not at all? That's surprising.
Expert: Not directly. Their influence is indirect, and it's all filtered through that lens of Price Value. A government subsidy is only persuasive if it makes the technology profitable. A neighbor’s success only matters if it proves the economic case. If the numbers don't add up, these other factors have almost no impact.
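For listeners who want to see what "indirect, filtered through Price Value" means statistically, the pattern is classic mediation. The sketch below uses entirely synthetic data (the 0.6 and 0.7 path weights, and the variable names, are illustrative assumptions, not the study's estimates; only the sample size of 171 echoes the survey) and a simple Baron–Kenny-style pair of regressions:

```python
import numpy as np

# Synthetic illustration of the mediation pattern the study reports:
# subsidies matter only through perceived price value. All coefficients
# and names are made up; n = 171 merely mirrors the survey's sample size.
rng = np.random.default_rng(0)
n = 171

subsidy = rng.normal(size=n)                      # "Facilitating Conditions"
price_value = 0.6 * subsidy + rng.normal(size=n)  # the mediator
use = 0.7 * price_value + rng.normal(size=n)      # adoption; no direct path

def ols(y, X):
    """Least-squares slopes, with an intercept column added automatically."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(price_value, subsidy)[0]                      # path a: subsidy -> mediator
b, c_direct = ols(use, np.column_stack([price_value, subsidy]))

# The indirect effect a*b is sizeable; the direct effect c' is near zero.
print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_direct:+.2f}")
```

In this toy setup the product of paths a and b carries essentially all of the subsidy's effect on use, while the direct coefficient hovers near zero, which is the shape of the finding the study describes.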
Host: And the sustainability angle? Surely, promoting a greener way of farming is a major driver?
Expert: You'd think so, but the study found that perceived sustainability benefits alone do not significantly drive adoption. For a farmer to invest, environmental advantages must be clearly linked to an economic gain, like reducing fertilizer costs or increasing crop yields. Sustainability has to pay the bills.
Host: This is such a critical insight. Let's shift to the "so what" for our listeners. What are the key business takeaways from this?
Expert: For any business in the Agri-tech space, the message is simple: lead with the Return on Investment. Don't just sell fancy features or sustainability buzzwords. Your marketing, your sales pitch—it all has to clearly demonstrate the economic value. Frame environmental benefits as a happy consequence of a smart financial decision.
Host: And what about for policymakers?
Expert: Policymakers need to realize that subsidies aren't a magic bullet. To be effective, financial incentives must be paired with tools that prove the tech's value—things like cost-benefit calculators, technical support, and farmer-to-farmer demonstration programs. They have to connect the policy to the farmer's bottom line.
Expert: For everyone else, it’s a powerful lesson in understanding your customer's core motivation. You have to identify their critical decision filter. For French farmers, every innovation is judged by its economic impact. The question is, what’s the non-negotiable filter for your customers?
Host: A fantastic summary. So, to recap: for technology to truly take root in agriculture, it’s not enough to be innovative, popular, or even sustainable. It must first and foremost prove its economic worth. The bottom line truly is the bottom line.
Host: Alex, thank you so much for bringing these insights to life for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more research that’s shaping the future of business.
Communications of the Association for Information Systems (2025)
Unveiling Enablers to the Use of Generative AI Artefacts in Rural Educational Settings: A Socio-Technical Perspective
Pramod K. Patnaik, Kunal Rao, Gaurav Dixit
This study investigates the factors that enable the use of Generative AI (GenAI) tools in rural educational settings within developing countries. Using a mixed-method approach that combines in-depth interviews and the Grey DEMATEL decision-making method, the research identifies and analyzes these enablers through a socio-technical lens to understand their causal relationships.
Problem
Marginalized rural communities in developing countries face significant challenges in education, including a persistent digital divide that limits access to modern learning tools. This research addresses the gap in understanding how Generative AI can be practically leveraged to overcome these education-related challenges and improve learning quality in under-resourced regions.
Outcome
- The study identified fifteen key enablers for using Generative AI in rural education, grouped into social and technical categories.
- 'Policy initiatives at the government level' was found to be the most critical enabler, directly influencing other key factors like GenAI training for teachers and students, community awareness, and school leadership commitment.
- Six novel enablers were uncovered through interviews, including affordable internet data, affordable telecommunication networks, and the provision of subsidized devices for lower-income groups.
- An empirical framework was developed to illustrate the causal relationships among the enablers, helping stakeholders prioritize interventions for effective GenAI adoption.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're looking at how Generative AI can transform education, not in Silicon Valley, but in some of the most under-resourced corners of the world.
Host: We're diving into a fascinating new study titled "Unveiling Enablers to the Use of Generative AI Artefacts in Rural Educational Settings: A Socio-Technical Perspective". It investigates the key factors that can help bring powerful AI tools to classrooms in developing countries. With me today is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna. It's a critical topic.
Host: Let's start with the big picture. What is the real-world problem this study is trying to solve?
Expert: The core problem is the digital divide. In many marginalized rural communities, especially in developing nations, students and teachers face huge educational challenges. We're talking about a lack of resources, infrastructure, and access to modern learning tools. While we see Generative AI changing industries in developed countries, there's a real risk these rural communities get left even further behind.
Host: So the question is, can GenAI be a bridge across that divide, instead of making it wider?
Expert: Exactly. The study specifically looks at how we can practically leverage these AI tools to overcome those long-standing challenges and actually improve the quality of education where it's needed most.
Host: So how did the researchers approach such a complex issue? It must be hard to study on the ground.
Expert: It is, and they used a really smart mixed-method approach. First, they went directly to the source, conducting in-depth interviews with teachers, government officials, and community members in rural India. This gave them rich, qualitative data—the real stories and challenges. Then, they took all the factors they identified and used a quantitative analysis to find the causal relationships between them.
Host: So it’s not just a list of problems, but a map of how one factor influences another?
Expert: Precisely. It allows them to say, 'If you want to achieve X, you first need to solve for Y'. It creates a clear roadmap for intervention.
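For readers curious how a DEMATEL-style analysis separates 'cause' factors from 'effect' factors, here is a minimal sketch of the classical (non-grey) version with a toy influence matrix. The four factor names and all scores below are illustrative assumptions, not the study's data:

```python
import numpy as np

# Toy direct-influence matrix for four hypothetical enablers (0-4 scale).
# Entry (i, j) = how strongly enabler i influences enabler j.
factors = ["Policy", "Teacher training", "Awareness", "Leadership"]
A = np.array([
    [0, 4, 3, 3],   # policy pushes strongly on everything else
    [1, 0, 2, 1],
    [1, 2, 0, 1],
    [1, 2, 2, 0],
], dtype=float)

# Classical DEMATEL: normalize by the largest row sum, then fold in all
# indirect influence via the total-relation matrix T = D (I - D)^-1.
D = A / A.sum(axis=1).max()
T = D @ np.linalg.inv(np.eye(len(A)) - D)

r = T.sum(axis=1)   # total influence each factor exerts
c = T.sum(axis=0)   # total influence each factor receives
for name, net in sorted(zip(factors, r - c), key=lambda x: -x[1]):
    role = "cause" if net > 0 else "effect"
    print(f"{name:16s} net influence {net:+.2f} ({role})")
```

A positive r minus c marks a 'cause' factor, which is the sense in which the study can say that government policy drives the other enablers rather than being driven by them. (The paper's Grey DEMATEL variant additionally works with interval-valued expert judgments before this step.)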
Host: That sounds powerful. What were the key findings? What are the biggest levers we can pull?
Expert: The study identified fifteen key 'enablers', which are the critical ingredients for success. But the single most important finding, the one that drives almost everything else, is 'Policy initiatives at the government level'.
Host: That's surprising. I would have guessed something more technical, like internet access.
Expert: And that's crucial, but the study shows that strong government policy is the 'cause' factor. It directly enables other key things like funding, GenAI training for teachers and students, creating community awareness, and getting school leadership on board. Without that top-down strategic support, everything else struggles.
Host: What other enablers stood out?
Expert: The interviews uncovered some really practical, foundational needs that go beyond just theory. Things we might take for granted, like affordable internet data plans, reliable telecommunication networks, and providing subsidized devices like laptops or tablets for lower-income families. It highlights that access isn't just about availability; it’s about affordability.
Host: This is the most important question for our listeners, Alex. This research is clearly vital for educators and policymakers, but why should business professionals pay attention? What are the takeaways for them?
Expert: I see three major opportunities here. First, this study is essentially a market-entry roadmap for a massive, untapped audience. For EdTech companies, telecoms, and hardware manufacturers, it lays out exactly what is needed to succeed in these emerging markets. It points directly to opportunities for public-private partnerships to provide those subsidized devices and affordable data plans we just talked about.
Host: So it’s a blueprint for doing business in these regions.
Expert: Absolutely. Second, it's a guide for product development. The study found that 'ease of use' and 'localized language support' are critical enablers. This tells tech companies that you can't just parachute in a complex, English-only product. Your user interface needs to be simple, intuitive, and available in local languages to gain any traction. That’s a direct mandate for product and design teams.
Host: That makes perfect sense. What’s the third opportunity?
Expert: It redefines effective Corporate Social Responsibility, or CSR. Instead of just one-off donations, a company can use this framework to make strategic investments. They could fund teacher training programs or develop technical support hubs in rural areas. This creates sustainable, long-term impact, builds immense brand loyalty, and helps develop the very ecosystem their business will depend on in the future.
Host: So to sum it up: Generative AI holds incredible promise for bridging the educational divide in rural communities, but technology alone isn't the answer.
Expert: That's right. Success hinges on a foundation of supportive government policy, which then enables crucial factors like training, awareness, and true affordability.
Host: And for businesses, this isn't just a social issue—it’s a clear roadmap for market opportunity, product design, and creating strategic, high-impact investments. Alex, thank you so much for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the intersection of business, technology, and groundbreaking research.
Generative AI, Rural, Education, Digital Divide, Interviews, Socio-technical Theory
Communications of the Association for Information Systems (2025)
Understanding the Implementation of Responsible Artificial Intelligence in Organizations: A Neo-Institutional Theory Perspective
David Horneber
This study conducts a literature review to understand why organizations struggle to effectively implement Responsible Artificial Intelligence (AI). Using a neo-institutional theory framework, the paper analyzes institutional pressures, common challenges, and the roles that AI practitioners play in either promoting or hindering the adoption of responsible AI practices.
Problem
Despite growing awareness of AI's ethical and social risks and the availability of responsible AI frameworks, many organizations fail to translate these principles into practice. This gap between stated policy and actual implementation means that the goals of making AI safe and ethical are often not met, creating significant risks for businesses and society while undermining trust.
Outcome
- A fundamental tension exists between the pressures to adopt Responsible AI (e.g., legal compliance, reputation) and inhibitors (e.g., market demand for functional AI, lack of accountability), leading to ineffective, symbolic implementation.
- Ineffectiveness often takes two forms: 'policy-practice decoupling' (policies are adopted for show but not implemented) and 'means-end decoupling' (practices are implemented but fail to achieve their intended ethical goals).
- AI practitioners play crucial roles as either 'institutional custodians' who resist change to preserve existing technical practices, or as 'institutional entrepreneurs' who champion the implementation of Responsible AI.
- The study concludes that a bottom-up approach by motivated practitioners is insufficient; effective implementation requires strong organizational support, clear structures, and proactive processes to bridge the gap between policy and successful outcomes.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge, where we translate complex research into actionable business intelligence. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating study titled "Understanding the Implementation of Responsible Artificial Intelligence in Organizations: A Neo-Institutional Theory Perspective."
Host: It explores why so many organizations seem to struggle with putting their responsible AI principles into actual practice, looking at the pressures, the challenges, and the key roles people play inside these companies.
Host: With me is our analyst, Alex Ian Sutherland, who has taken a deep dive into this study. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, we hear a lot about AI ethics and all these new responsible AI frameworks. But this study suggests there’s a massive gap between what companies *say* they'll do and what they *actually* do. What's the core problem here?
Expert: That's the central issue. The study finds that despite growing awareness of AI's risks, the principles often remain just that—principles on a webpage. This gap between policy and practice means the goals of making AI safe and ethical are not being met.
Expert: This creates huge risks, not just for society, but directly for the businesses themselves. It undermines customer trust and leaves them exposed to future legal and reputational damage.
Host: So how did the researchers approach such a complex organizational problem?
Expert: They conducted a comprehensive literature review, synthesizing the findings from dozens of real-world, empirical studies on the topic. Then, they analyzed this collective evidence through a specific lens called neo-institutional theory.
Host: That sounds a bit academic. Can you break that down for us?
Expert: Absolutely. In simple terms, it's a way of understanding how organizations respond to external pressures—from society, from regulators—to appear legitimate. Sometimes, this means they adopt policies for show, even if their internal day-to-day work doesn't change.
Host: That makes sense. It’s about looking the part. So, using that lens, what were the most significant findings from the study?
Expert: There were three that really stood out. First, there's a fundamental tension at play. On one side, you have pressures pushing for responsible AI, like legal compliance and protecting the company's reputation. On the other, you have inhibitors, like market demand for AI that just *works*, regardless of ethics, and a lack of real accountability.
Host: And this tension leads to problems?
Expert: Exactly. It leads to something the study calls 'decoupling'. The most common form is 'policy-practice decoupling'. This is when a company adopts a great-sounding ethics policy, but the engineering teams on the ground never actually implement it.
Expert: The second, more subtle form is 'means-end decoupling'. This is when teams *do* implement a practice, like a bias check, but it's done in a superficial way that doesn't actually achieve the ethical goal. It's essentially just ticking a box.
Host: So there's a disconnect. What was the second key finding?
Expert: It’s about the people on the ground: the AI practitioners. The study found they fall into two distinct roles. They are either 'institutional custodians' or 'institutional entrepreneurs'.
Expert: 'Custodians' are those who resist change to protect existing practices. Think of a product manager who argues that ethical considerations slow down development and hurt performance. They maintain the status quo.
Expert: 'Entrepreneurs', on the other hand, are the champions. They are the ones who passionately advocate for responsible AI, often taking it on themselves without a formal mandate because they believe it's the right thing to do.
Host: Which leads us to the third point, which I imagine is that these champions can't do it alone?
Expert: Precisely. The study concludes that this bottom-up approach, relying on a few passionate individuals, is not enough. For responsible AI to be effective, it requires strong, top-down organizational support, clear structures, and proactive processes.
Host: This is the crucial part for our listeners. For a business leader, what are the practical takeaways here? Why does this matter?
Expert: First, leaders need to conduct an honest assessment. Are your responsible AI efforts real, or are they just symbolic? Creating a policy to look good, without giving your teams the time, resources, and authority to implement it, is setting them—and the company—up for failure.
Host: So it's about moving beyond lip service to avoid real business risk.
Expert: Exactly. Second, find and empower your 'institutional entrepreneurs'. The study shows these champions often face immense stress and burnout. So, formalize their roles. Give them authority, a budget, and a direct line to leadership. Don't let their goodwill be the only thing powering your ethics strategy.
Host: And the final takeaway?
Expert: Be proactive, not reactive. You can't bolt on ethics at the end. The study suggests building responsible AI structures that are both centralized and decentralized. A central team can provide resources and set standards, but you also need experts embedded *within* each development team to manage risks from the very beginning.
Host: That’s incredibly clear. So, to summarize: there's a major gap between AI policy and practice, driven by competing business pressures. This results in actions that are often just for show.
Host: And while passionate employees can drive change from the bottom up, they will ultimately fail without sincere, structural support from leadership.
Host: Alex, thank you so much for breaking down this complex but incredibly important study for us.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning in to A.I.S. Insights, powered by Living Knowledge.
Artificial Intelligence, Responsible AI, AI Ethics, Organizations, Neo-Institutional Theory
Journal of the Association for Information Systems (2026)
Affordance-Based Pathway Model of Social Inclusion: A Case Study of Virtual Worlds and People With Lifelong Disability
Karen Stendal, Maung K. Sein, Devinder Thapa
This study explores how individuals with lifelong disabilities (PWLD) use virtual worlds, specifically Second Life, to achieve social inclusion. Using a qualitative approach with in-depth interviews and participant observation, the researchers analyzed how PWLD experience the platform's features. The goal was to develop a model explaining the process through which technology facilitates greater community participation and interpersonal connection for this marginalized group.
Problem
People with lifelong disabilities often face significant social isolation and exclusion due to physical, mental, or sensory impairments that hinder their full participation in society. This lack of social connection can negatively impact their psychological and emotional well-being. This research addresses the gap in understanding the specific mechanisms by which technology, like virtual worlds, can help this population move from isolation to inclusion.
Outcome
- Virtual worlds offer five key 'affordances' (action possibilities) that empower people with lifelong disabilities (PWLD).
- Three 'functional' affordances were identified: Communicability (interacting without barriers like hearing loss), Mobility (moving freely without physical limitations), and Personalizability (controlling one's digital appearance and whether to disclose a disability).
- These functional capabilities enable two 'social' affordances: Engageability (the ability to join in social activities) and Self-Actualizability (the ability to realize one's potential and help others).
- The study proposes an 'Affordance-Based Pathway Model' which shows how using these features helps PWLD build interpersonal relationships and participate in communities, leading to social inclusion.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers, and with me today is our expert analyst, Alex Ian Sutherland. Host: Alex, today we're diving into a fascinating study from the Journal of the Association for Information Systems titled, "Affordance-Based Pathway Model of Social Inclusion: A Case Study of Virtual Worlds and People With Lifelong Disability". Host: In short, it explores how people with lifelong disabilities use virtual worlds, like the platform Second Life, to achieve social inclusion and build community. Host: So, Alex, before we get into the virtual world, let's talk about the real world. What is the core problem this study is trying to address? Expert: Anna, it addresses a significant challenge. People with lifelong disabilities often face profound social isolation. Physical, mental, or sensory barriers can prevent them from fully participating in society, which in turn impacts their psychological and emotional well-being. Expert: While we know technology can help, there’s been a gap in understanding the specific mechanisms—the 'how'—technology can create a pathway from isolation to inclusion for this group. Host: It sounds like a complex challenge to study. So how did the researchers approach this? Expert: They took a very human-centered approach. They went directly into the virtual world of Second Life and conducted in-depth interviews and participant observations with 18 people with lifelong disabilities. This allowed them to understand the lived experiences of both new and experienced users. Host: And what did they find? What is it about these virtual worlds that makes such a difference? Expert: They discovered that the platform offers five key 'affordances'—which is simply a term for the action possibilities or opportunities that the technology makes possible for these users. They grouped them into two categories: functional and social. Host: Okay, five key opportunities. 
Can you break down the first category, the functional ones, for us? Expert: Absolutely. The first three are foundational. There’s 'Communicability'—the ability to interact without barriers. One participant with hearing loss noted that text chat made it easier to interact because they didn't need sign language. Expert: Second is 'Mobility'. This is about moving freely without physical limitations. A participant who uses a wheelchair in real life shared this powerful thought: "In real life I can't dance; here I can dance with the stars." Expert: The third is 'Personalizability'. This is the user's ability to control their digital appearance through an avatar, and importantly, to choose whether or not to disclose their disability. It puts them in control of their identity. Host: So those three—Communicability, Mobility, and Personalizability—are the functional building blocks. How do they lead to actual social connection? Expert: They directly enable the two 'social' affordances. The first is 'Engageability'—the ability to actually join in social activities and be part of a group. Expert: This then leads to the final and perhaps most profound affordance: 'Self-Actualizability'. This is the ability to realize one's potential and contribute to the well-being of others. For example, a retired teacher in the study found new purpose in helping new users get started on the platform. Host: This is incredibly powerful on a human level. But Alex, this is a business and technology podcast. What are the practical takeaways here for business leaders? Expert: This is where it gets very relevant. First, for any company building in the metaverse or developing collaborative digital platforms, this study is a roadmap for truly inclusive design. It shows that you need to intentionally design for features that enhance communication, freedom of movement, and user personalization. Host: So it's a model for product development in these new digital spaces. Expert: Exactly. 
And it also highlights an often-overlooked user base. Designing for inclusivity isn't just a social good; it opens up your product to a massive global market. Businesses can also apply these principles internally to create more inclusive remote work environments, ensuring employees with disabilities can fully participate in digital collaboration and company culture. Host: That’s a fantastic point about corporate applications. Is there anything else? Expert: Yes, and this is a critical takeaway. The study emphasizes that technology alone is not a magic bullet. The users succeeded because of what the researchers call 'facilitating conditions'—things like peer support, user training, and community helpers. Expert: For businesses, the lesson is clear: you can't just launch a product. You need to build and foster the support ecosystem and the community around it to ensure users can truly unlock its value. Host: Let’s recap then. Virtual worlds can be a powerful tool for social inclusion by providing five key opportunities: three functional ones that enable two social ones. Host: And for businesses, the key takeaways are to design intentionally for inclusivity, recognize this valuable user base, and remember to build the support system, not just the technology itself. Host: Alex Ian Sutherland, thank you for breaking this down for us. It’s a powerful reminder that technology is ultimately about people. Host: And thank you to our audience for tuning into A.I.S. Insights — powered by Living Knowledge.
Social Inclusion, Virtual Worlds (VW), People With Lifelong Disability (PWLD), Affordances, Second Life, Assistive Technology, Qualitative Study
MIS Quarterly Executive (2022)
Self-Sovereign Identity and Verifiable Credentials in Your Digital Wallet
Mary Lacity, Erran Carmel
This paper provides an overview of Self-Sovereign Identity (SSI), a decentralized approach for issuing, holding, and verifying digital credentials. Through an analysis of the technology's architecture and a case study of the UK's National Health Service (NHS), the authors explain SSI's business value, implementation, and potential risks for IT leaders.
Problem
Current digital identity systems are centralized, meaning individuals lack control over their own credentials like licenses, diplomas, or work histories. This creates inefficiencies for businesses (e.g., slow employee onboarding), high costs associated with password management, and significant cybersecurity risks as centralized databases are prime targets for data breaches and identity theft.
Outcome
- Self-Sovereign Identity (SSI) empowers individuals to possess and control their own digital proofs of credentials in a secure digital wallet on their smartphone.
- SSI can dramatically improve business efficiency by streamlining processes like employee onboarding, reducing a multi-day manual verification process to a few minutes, as seen in the NHS case study.
- The technology enhances privacy by enabling data minimization, allowing users to prove a specific attribute (e.g., being over 21) without revealing unnecessary personal information like their full date of birth or address.
- For organizations, SSI reduces cybersecurity risks and costs by eliminating centralized credential databases and the need for password resets.
- While promising, SSI is an emerging technology with risks including the need for widespread ecosystem adoption, the development of sustainable economic models, and ensuring robust cybersecurity for individual wallets.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge, the podcast where we translate complex research into actionable business strategy. I’m your host, Anna Ivy Summers. Host: Today, we’re diving into a study from MIS Quarterly Executive titled "Self-Sovereign Identity and Verifiable Credentials in Your Digital Wallet." Host: It explores a decentralized approach for managing digital credentials, analyzing its business value, how it's implemented, and the potential risks for today’s IT leaders. Here to help us unpack it is our analyst, Alex Ian Sutherland. Welcome, Alex. Expert: Great to be here, Anna. Host: Alex, before we get into the solution, let's talk about the problem. Most of us don't really think about how our digital identity is managed today, but this study suggests it's a huge issue. What’s wrong with the current system? Expert: The problem is that our digital identities are completely fragmented and controlled by others. Think about your physical wallet. You have a driver's license, maybe a university ID, a credit card. You control that wallet. Online, it’s the opposite. Your "credentials" are spread across countless organizations, each with its own username and password. Expert: The study points out that the average internet user has around 150 online accounts. For businesses, managing all these separate identities is inefficient and incredibly risky. These centralized databases of user data are what the study calls "honey pots," making them prime targets for data breaches. Host: So it's a headache for us as individuals, and a massive security liability for companies. Expert: Exactly. And it’s expensive. The research mentions that a single corporate password reset costs a company, on average, seventy dollars. When you scale that up, the costs become astronomical, not to mention the slow, manual processes for things like employee onboarding. Host: So, the study explores a new approach called Self-Sovereign Identity, or SSI. 
How did the researchers go about studying this emerging technology? Expert: This wasn't a lab experiment. The authors spent two years deeply engaged with the communities developing SSI. They interviewed leaders and conducted detailed case studies of early adopters, most notably the U.K.’s National Health Service, or NHS. This gives us a real-world view of how the technology works in a massive, complex organization. Host: That NHS case sounds fascinating. Let's get to the key findings. What is the big idea behind Self-Sovereign Identity? Expert: The core idea is to give control back to the individual. With SSI, you hold your own official, verifiable credentials—like your university degree or professional licenses—in a secure digital wallet on your smartphone. You decide exactly what information to share, and with whom. Host: So instead of a potential employer having to call my university to verify my degree, I could just prove it to them directly from my phone in an instant? Expert: Precisely. And that leads to the second key finding: a dramatic boost in business efficiency. The NHS, for example, processes over a million staff transfers between its hospitals each year. The old, paper-based onboarding process took days. The study found that with an SSI-based "digital staff passport," that process was cut down to just a few minutes. Host: From days to minutes is a huge leap. But what about privacy? Does this mean we're sharing even more personal data from our phones? Expert: It’s actually the opposite, which is the third major finding: enhanced privacy through what's called 'data minimization'. The study gives a classic example: proving you're old enough to buy a drink. Right now, you show your driver's license, which reveals your name, address, and full date of birth. The bartender only needs to know if you’re over 21. 
Expert: With an SSI wallet, you could provide a verifiable, cryptographic proof that simply says "Yes, this person is over 21," without revealing any of that other sensitive data. You only share what is absolutely necessary for the transaction. Host: That's a powerful concept. So for businesses, the value is efficiency, but also security, right? Expert: Right. That's the final key finding. By moving away from centralized databases, companies reduce their cybersecurity risk profile. They are no longer the 'honey pot' for hackers. It removes the liability of storing millions of user credentials and cuts the operational costs of things like password management. Host: This all sounds truly transformative. Let's focus on the bottom line. What are the key takeaways for business leaders listening today? Why should they care about SSI right now? Expert: The most immediate application is for streamlining any business process that relies on verifying credentials. We saw it with employee onboarding at the NHS, but this could apply to customer verification in banking, compliance checks in supply chains, or membership verification. Host: And it seems like a great way to build trust with customers. Expert: Absolutely. In an era of constant data breaches, offering your customers a more private and secure way to interact is a significant competitive advantage. But the study is also clear that this isn't a silver bullet. It's an emerging technology. Host: What are the main risks businesses need to consider? Expert: The biggest challenge is ecosystem adoption. For SSI to be truly useful, you need a critical mass of organizations issuing credentials, and organizations accepting them. There are also still questions to be solved around sustainable economic models and ensuring the security of the individual's digital wallet is foolproof. Host: So it's a long-term strategic play, not something you can just switch on tomorrow. Expert: Exactly. 
The study’s key advice for leaders is to start learning and exploring this space now. An interesting tip from the NHS project was this: when you talk about it, focus on the business problem you're solving—efficiency, security, and trust. That's what gets buy-in. Host: Alright, Alex, let’s wrap it up. To summarize, the current way we manage digital identity is inefficient and insecure. Self-Sovereign Identity puts control back into the hands of the individual through a secure digital wallet. Host: For businesses, this means faster processes, lower cyber risks, and a powerful new way to build customer trust. While it's still early days, now is the time for leaders to get educated and start planning for this shift. Host: Alex, thank you so much for breaking down this complex topic for us. Expert: My pleasure, Anna. Host: And thank you to our listeners for tuning into A.I.S. Insights, powered by Living Knowledge. Join us next time as we explore another big idea shaping the future of business.
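The data-minimization idea discussed above (proving "over 21" without revealing a date of birth) can be illustrated with a small sketch. This is a hypothetical simplification, not the NHS implementation or any real SSI stack: production systems use public-key signatures and selective-disclosure schemes (e.g., BBS+ signatures or zero-knowledge proofs), whereas here a shared-key HMAC stands in for the issuer's signature, and the issuer signs each attribute separately ("atomic credentials") so the holder can disclose one claim without the rest.

```python
import hmac
import hashlib
import json

# Stand-in for the issuer's private signing key (hypothetical).
# In a real SSI system the verifier would check a public-key
# signature instead of sharing this secret.
ISSUER_KEY = b"issuer-secret"

def _sign(name, value):
    msg = json.dumps([name, value]).encode()
    return hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()

def issue_credential(attributes: dict) -> dict:
    """Issuer signs every attribute individually, so each claim
    can later be presented on its own."""
    return {name: {"value": value, "sig": _sign(name, value)}
            for name, value in attributes.items()}

def present(credential: dict, claims: list) -> dict:
    """Holder discloses only the requested claims from the wallet."""
    return {name: credential[name] for name in claims}

def verify(presentation: dict) -> bool:
    """Verifier checks the issuer's signature on each disclosed claim."""
    return all(
        hmac.compare_digest(entry["sig"], _sign(name, entry["value"]))
        for name, entry in presentation.items()
    )

# The bartender scenario: prove the derived claim, nothing else.
wallet = issue_credential({"name": "A. Holder",
                           "date_of_birth": "1990-05-01",
                           "over_21": True})
proof = present(wallet, ["over_21"])   # no name, no date of birth
assert verify(proof) and "date_of_birth" not in proof
```

The design point matches the study's argument: the verifier learns only that a trusted issuer vouched for the single attribute it actually needs, which is what removes the "honey pot" incentive to hoard full identity records.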
Self-Sovereign Identity (SSI), Verifiable Credentials, Digital Wallet, Decentralized Identity, Identity Management, Digital Trust, Blockchain
MIS Quarterly Executive (2022)
Using Lessons from the COVID-19 Crisis to Move from Traditional to Adaptive IT Governance
Heiko Gewald, Heinz-Theo Wagner
This study analyzes how IT governance structures in nine international companies, particularly in regulated industries, were adapted during the COVID-19 crisis. It investigates the shift from rigid, formal governance to more flexible, relational models that enabled rapid decision-making. The paper provides recommendations on how to integrate these crisis-mode efficiencies to create a more adaptive IT governance system for post-crisis operations.
Problem
Traditional IT governance systems are often slow, bureaucratic, and focused on control and risk avoidance, which makes them ineffective during a crisis requiring speed and flexibility. The COVID-19 pandemic exposed this weakness, as companies found their existing processes were too rigid to handle the sudden need for digital transformation and remote work. The study addresses how organizations can evolve their governance to be more agile without sacrificing regulatory compliance.
Outcome
- Companies successfully adapted during the crisis by adopting leaner decision-making structures with fewer participants.
- The influence of IT experts in decision-making increased significantly, shifting the focus from risk-avoidance to finding the best functional solutions.
- Formal controls were complemented or replaced by relational governance based on social interaction, trust, and collaboration, which proved to be more efficient.
- The paper recommends permanently adopting these changes to create an 'adaptive IT governance' system that balances flexibility with compliance, ultimately delivering more business value.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we're looking at a fascinating question that emerged from the chaos of the recent global crisis: How did companies manage to pivot so fast, and what can we learn from it? Host: We’re diving into a study from MIS Quarterly Executive titled, "Using Lessons from the COVID-19 Crisis to Move from Traditional to Adaptive IT Governance." With me is our expert analyst, Alex Ian Sutherland. Alex, welcome. Expert: Great to be here, Anna. Host: To start, this study analyzed how major international companies, especially in regulated fields, adapted their IT governance during the pandemic. It’s about moving from rigid rules to more flexible, relationship-based models that allowed them to act fast. Host: So Alex, let's set the stage. What was the big problem with IT governance that the pandemic put under a microscope? Expert: The core problem was that traditional IT governance had become slow, bureaucratic, and obsessed with avoiding risk. Think of huge committees, endless meetings, and layers of approvals for even minor IT decisions. Host: A process designed for stability, not speed. Expert: Exactly. One CIO from a global bank in the study said, “We are way too slow in making decisions, specifically when it comes to IT decisions.” These systems were built to satisfy regulators and protect managers from liability, not to create business value or respond to a crisis. Host: And then a crisis hit that demanded exactly that: speed and flexibility. Expert: Right. Suddenly, the entire workforce needed to go remote, which was a massive IT challenge. The old, slow governance models were a roadblock. The study found that another CIO sarcastically described his pre-crisis committees as having "ten lawyers for every IT member." That kind of structure just couldn't work. 
Host: So how did the researchers get inside these companies to understand what changed? Expert: They conducted in-depth interviews with CIOs and business managers from nine large international companies in sectors like banking, auditing, and insurance. They did this at two key moments: once in mid-2020, in the thick of the crisis, and again at the end of 2021 as things were returning to a new normal. Host: That gives a great before-and-after picture. So, what were the key findings? What actually happened inside these organizations? Expert: Three big things stood out. First, companies created leaner decision-making structures. The slow, multi-layered committees were replaced by small, empowered crisis teams, often called Disaster Response Groups or DRGs. Host: Fewer cooks in the kitchen. Expert: Precisely. One bank restricted its DRG to a core team of just five managers. They adopted what the CIO called a "'one meeting per decision' routine." This allowed them to make critical choices about things like video conferencing and VPN technology in hours, not months. Host: A radical change. What was the second key finding? Expert: The influence of IT experts shot up. In the old model, their voices were often diluted. During the crisis, IT leaders were central to the decision-making groups. The focus shifted from "what is the least risky option?" to "what is the best functional solution to keep the business running?" Host: So the people who actually understood the technology were empowered to solve the problem. Expert: Yes. As one CIO from an auditing firm put it, "It was classic business/IT alignment. The business described the problem and we, the IT department, provided the best solution." Host: And the third major finding? Expert: This is perhaps the most interesting. Formal controls were replaced by what the study calls 'relational governance'. Instead of relying on thick binders of rules, teams started relying on social interaction, trust, and collaboration. 
Host: It became more about people and relationships. Expert: Exactly. A CIO from a financial services firm said, “We do not exchange lengthy documents anymore; instead, we actually talk to each other.” This trust-based approach proved to be far more efficient and flexible than the rigid, control-focused systems they had before. Host: This is the crucial part for our listeners, Alex. How can businesses apply these crisis-mode lessons now, without a crisis forcing their hand? What’s the big takeaway? Expert: The main takeaway is that companies shouldn't just go back to the old way of doing things. They have a golden opportunity to build what the study calls an 'adaptive IT governance' system. Host: And what does that look like in practice? Expert: First, make those lean decision-making structures permanent. Keep committees small, focused, and empowered. Strive for that "one meeting per decision" mindset. Second, permanently increase the influence of your IT experts. Ensure they are at the table and have real decision-making power, not just an advisory role. Host: So it’s about institutionalizing the speed and expertise you discovered during the crisis. Expert: Right. And finally, it's about striking a new balance between formal rules and relational trust. You still need rules, especially in regulated industries, but you can reduce them to a necessary minimum and complement them with governance based on collaboration and mutual trust. It’s less about top-down control and more about shared goals. Host: So it’s not about throwing out the rulebook, but about creating a smarter, more flexible one that allows you to be agile while still being compliant. Expert: That's the core message. The crisis proved that this approach delivers better results, faster. Now is the time to make it the new standard. Host: A powerful lesson indeed. To summarize for our audience: the pandemic forced companies to abandon slow, risk-averse IT governance. 
The keys to their success were leaner decision-making, empowering IT experts, and shifting from rigid rules to trust-based collaboration. The challenge now is to make those changes permanent to create a more adaptive and value-driven organization. Host: Alex Ian Sutherland, thank you so much for breaking this down for us. Expert: My pleasure, Anna. Host: And thank you for listening to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping the future of business.
Building an Artificial Intelligence Explanation Capability
Ida Someh, Barbara H. Wixom, Cynthia M. Beath, Angela Zutavern
This study introduces the concept of an "AI Explanation Capability" (AIX) that companies must develop to successfully implement artificial intelligence. Using case studies from the Australian Taxation Office and General Electric, the paper outlines a framework with four key dimensions (decision tracing, bias remediation, boundary setting, and value formulation) to help organizations address the inherent challenges of AI.
Problem
Businesses are increasingly adopting AI but struggle with its distinctive challenges, particularly the "black-box" nature of complex models. This opacity makes it difficult to trust AI, manage risks like algorithmic bias, prevent unintended negative consequences, and prove the technology's business value, ultimately hindering widespread and successful deployment.
Outcome
- AI projects present four unique challenges: Model Opacity (the inability to understand a model's inner workings), Model Drift (degrading performance over time), Mindless Actions (acting without context), and the Unproven Nature of AI (difficulty in demonstrating value).
- To overcome these challenges, organizations must build a new organizational competency called an AI Explanation Capability (AIX).
- The AIX capability comprises four dimensions: Decision Tracing (making models understandable), Bias Remediation (identifying and fixing unfairness), Boundary Setting (defining safe operating limits for AI), and Value Formulation (articulating and measuring the business value of AI).
- Building this capability requires a company-wide effort, involving domain experts and business leaders alongside data scientists to ensure AI is deployed safely, ethically, and effectively.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge, where we translate complex research into actionable business strategy. I’m your host, Anna Ivy Summers. Host: Today, we’re diving into a critical question for any company implementing artificial intelligence. Our guide is a fascinating study from MIS Quarterly Executive titled “Building an Artificial Intelligence Explanation Capability.” Host: It introduces the idea that to succeed with AI, companies need a new core competency: the ability to explain how and why their AI makes the decisions it does. Here to break it down for us is our analyst, Alex Ian Sutherland. Welcome, Alex. Expert: Great to be here, Anna. Host: Alex, let's start with the big picture. Businesses are pouring billions into AI, but many projects never see the light of day. What’s the core problem this study identifies? Expert: The core problem is trust. Business leaders are struggling with the "black box" nature of modern AI. When you have an algorithm making crucial decisions—about loans, hiring, or tax compliance—and you can't explain its logic, you have a massive risk management problem. Expert: The study points to real-world examples, like systems showing bias in parole decisions or incorrectly calculating government benefits. This opacity makes it incredibly difficult to manage risks, prevent negative consequences, and frankly, prove to executives that the AI is even creating business value. Host: So the black box is holding back real-world adoption. How did the researchers approach this problem? Expert: Instead of just staying in the lab, they went into the field. The study is built on deep case studies of two major organizations: the Australian Taxation Office, or ATO, and General Electric. They examined how these companies were actually deploying AI and overcoming these exact challenges. Host: And what did they find? What were the key takeaways from seeing AI in action at that scale? 
Expert: They found that AI presents four distinct challenges. First is 'Model Opacity,' which is that black box problem we just discussed. Second is 'Model Drift,' the tendency for an AI's performance to get worse over time as the real world changes. Expert: Third is 'Mindless Actions'—an AI will follow its programming, even if the context changes and its actions no longer make sense. And finally, the 'Unproven Nature of AI,' which is the difficulty in clearly connecting an AI project to bottom-line results. Host: That’s a powerful list of hurdles. So how do successful organizations get over them? Expert: By deliberately building what the study calls an "AI Explanation Capability," or AIX. It's not a piece of software; it's an organizational skill. And it has four key dimensions. Host: Okay, let's walk through them. What’s the first one? Expert: The first is 'Decision Tracing.' This is the ability to connect the dots from the data an AI receives to the output it produces. It's about making the model understandable, not just to data scientists, but to business managers and regulators. Host: The second? Expert: 'Bias Remediation.' This is about actively hunting for and fixing unfairness in your models. It involves careful data selection, systematic auditing, and ensuring the AI is representative of the populations it serves. Host: That sounds critical for any customer-facing AI. What about the third dimension? Expert: 'Boundary Setting.' This means defining the safe operating limits for the AI. It’s about knowing when a human needs to step in. The AI isn't the final judge; it’s a tool to support human experts, and you have to build the workflow around that principle. Host: And the final dimension of this capability? Expert: 'Value Formulation.' This is arguably the most important for business leaders. It’s the ability to articulate, measure, and prove the business value of the AI. It's not enough for it to be clever; it has to be valuable. 
Host: This is the core of the episode, Alex. Why does building this 'AIX' capability matter so much for businesses listening right now? Expert: Because it reframes the challenge. Success with AI isn't just a technical problem; it's an organizational one. The study shows that technology is only half the battle. Expert: Look at the Australian Taxation Office. They had to explain their AI to regulators. So, they used a simple, easy-to-understand model to validate the decisions of a more complex, "black box" neural network. This built trust because they could prove the advanced AI was behaving rationally. Host: So they built a bridge from the old way to the new way. What about General Electric? Expert: At GE, they were using AI to check contractor safety documents—a very high-stakes task. They built a system where their human safety experts could easily see the evidence the AI used for its assessment and could override it. They created a true human-in-the-loop system, effectively setting those boundaries we talked about. Host: So the key takeaway for our listeners is that deploying AI requires building a support structure around it? Expert: Exactly. It's about building a cross-functional team. You need your data scientists, but you also need your domain experts, your business leaders, and your legal team all working together to trace decisions, remediate bias, set boundaries, and prove value. AI cannot succeed in a silo. Host: A powerful conclusion. Let’s summarize. To unlock the value of AI and overcome its inherent risks, businesses can’t just buy technology. They must build a new organizational muscle—an AI Explanation Capability. Host: This means focusing on Decision Tracing, Bias Remediation, Boundary Setting, and Value Formulation. It’s a holistic approach that puts people and processes at the center of AI deployment. Host: Alex, thank you for making this complex topic so clear and actionable. Expert: My pleasure, Anna. 
Host: And thanks to all of you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to bridge the gap between academia and business.
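The decision-tracing tactic Alex attributes to the ATO (using a simple, understandable model to validate a black-box one) is known more generally as fitting a global surrogate. The sketch below is hypothetical throughout: the "black-box" scorer, the case data, and the one-threshold rule are stand-ins invented for illustration, not the ATO's actual models. The idea it demonstrates is the technique itself: fit an explainable rule to the opaque model's decisions and measure how often the two agree (its fidelity).

```python
# Hypothetical black-box model: flags a tax return for review.
# Its inner logic is treated as unknown by the surrogate below.
def black_box(income: float, deductions: float) -> bool:
    return deductions > 0.4 * income + 1_000

# Synthetic grid of cases and the black box's decisions on them.
cases = [(income, deductions)
         for income in range(10_000, 100_001, 5_000)
         for deductions in range(0, 50_001, 2_500)]
labels = [black_box(i, d) for i, d in cases]

# Surrogate: a single human-readable rule, "flag if the
# deduction-to-income ratio exceeds some threshold". We scan
# candidate thresholds and keep the one that best mimics the
# black box (highest fidelity).
best_ratio, best_fidelity = None, 0.0
for candidate in [r / 100 for r in range(10, 91)]:
    preds = [d > candidate * i for i, d in cases]
    fidelity = sum(p == y for p, y in zip(preds, labels)) / len(cases)
    if fidelity > best_fidelity:
        best_ratio, best_fidelity = candidate, fidelity

print(f"Surrogate rule: flag if deductions > {best_ratio:.2f} * income")
print(f"Agreement with black box: {best_fidelity:.0%}")
```

If the simple rule tracks the opaque model closely, an auditor or regulator gains evidence that the black box is behaving rationally, which is the trust-building move described in the episode; a sudden drop in fidelity over time would also be one cheap signal of model drift.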
AI explanation, explainable AI, AIX capability, model opacity, model drift, AI governance, bias remediation
MIS Quarterly Executive (2021)
Unexpected Benefits from a Shadow Environmental Management Information System
Johann Kranz, Marina Fiedler, Anna Seidler, Kim Strunk, Anne Ixmeier
This study analyzes a German chemical company where a single employee, outside of the formal IT department, developed an Environmental Management Information System (EMIS). The paper examines how this grassroots 'shadow IT' project was successfully adopted company-wide, producing both planned and unexpected benefits. The findings are used to provide recommendations for business leaders on how to effectively implement information systems that drive both eco-sustainability and business value.
Problem
Many companies struggle to effectively improve their environmental sustainability because critical information is often inaccessible, fragmented across different departments, or simply doesn't exist. This information gap prevents decision-makers from getting a unified view of their products' environmental impact, making it difficult to turn sustainability goals into concrete actions and strategic advantages.
Outcome
- Greater Product Transparency: The system made it easy for employees to assess the environmental impact of materials and products. - Improved Environmental Footprint: The company improved its energy and water efficiency, reduced carbon emissions, and increased waste productivity. - Strategic Differentiation: The system provided a competitive advantage by enabling the company to meet growing customer demand for verified sustainable products, leading to increased sales and market share. - Increased Profitability: Sustainable products became surprisingly profitable, contributing to higher turnover and outperforming competitors. - More Robust Sourcing: The system helped identify supply chain risks, such as the scarcity of key raw materials, prompting proactive strategies to ensure resource availability. - Empowered Employees: The tool spurred an increase in bottom-up, employee-driven sustainability initiatives beyond core business operations.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we're diving into a fascinating study titled "Unexpected Benefits from a Shadow Environmental Management Information System." Host: It explores how a grassroots 'shadow IT' project, developed by a single employee at a German chemical company, was successfully adopted company-wide, producing some truly surprising benefits for both sustainability and the bottom line. Host: With me is our expert analyst, Alex Ian Sutherland. Alex, welcome. Expert: Great to be here, Anna. Host: So, let's start with the big picture. Many companies talk about sustainability, but struggle to put it into practice. What's the core problem this study addresses? Expert: The core problem is an information gap. The study highlights that in most companies, critical environmental data is scattered across different departments, siloed in various systems, or just doesn't exist in a usable format. Host: Meaning decision-makers are flying blind? Expert: Exactly. Without a unified view of a product’s entire lifecycle—from raw materials to finished goods—it's incredibly difficult to turn sustainability goals into concrete actions. You can't improve what you can't measure. Host: So how did the researchers in this study approach this problem? Expert: They conducted an in-depth case study of a major German chemical company, which they call 'ChemCo'. Over a 13-year period, they interviewed employees, managers, and even competitors. Expert: They traced the journey of an Environmental Management Information System, or EMIS, that was created not by the IT department, but by one motivated manager in supply chain management during his own time. Host: A classic 'shadow IT' project, then. What were the key findings from this bottom-up approach? Expert: Well, there were the planned benefits, and then the unexpected ones, which are really powerful. 
The first, as you’d expect, was greater product transparency. Host: So, employees could finally see the environmental impact of different materials. Expert: Right. And that led directly to an improved environmental footprint. The data showed the company was able to improve energy and water efficiency and reduce waste. For instance, they found a way to turn 6,000 tons of onion processing waste into renewable biogas energy. Host: That’s a great tangible outcome. But you mentioned unexpected benefits? Expert: This is where it gets interesting for business leaders. The first was strategic differentiation. Armed with this data, ChemCo could prove its sustainability claims to customers. This became a massive competitive advantage. Host: Which I imagine translated directly into sales. Expert: It did, and that was the second surprise: a significant increase in profitability. Sustainable products, which are often seen as a cost center, became highly profitable. The study shows ChemCo’s sales and profit growth actually outperformed its three main competitors over a decade. Host: So doing good was also good for business. What else? Expert: Two more big things. The system helped them identify supply chain risks, like the growing scarcity of a key material like sandalwood, which prompted them to find sustainable alternatives years before their rivals. And finally, it empowered employees, sparking a wave of bottom-up sustainability initiatives across the company. Host: This is a powerful story. For the business professionals listening, what is the most important lesson here? Why does this study matter? Expert: The biggest takeaway is about innovation. This whole transformation wasn't driven by a big, top-down corporate mandate. It was driven by a passionate employee who built a simple tool to solve a problem he saw. Host: But 'shadow IT' is often seen as a risk by leadership. Expert: It can be. But this study urges leaders to see these initiatives as opportunities. 
They often highlight an unmet business need. The lesson is not to shut them down, but to nurture them. Host: So the advice is to find those innovators within your own ranks and empower them? Expert: Precisely. And the second key lesson is to keep it simple. This revolutionary system started as a spreadsheet. Its simplicity and accessibility were crucial. Anyone could use it and contribute information, which broke down those data silos we talked about earlier. Host: It sounds like the value was in democratizing the data, making sustainability everyone’s job. Expert: That's the perfect way to put it. It created a shared language and a shared mission that ultimately changed the company’s culture and strategy. Host: So, to summarize: a grassroots, employee-driven IT project not only improved a company's environmental footprint but also drove profitability, uncovered supply chain risks, and created a lasting competitive advantage. Host: The key for business leaders is to embrace these bottom-up innovations and understand that sometimes the simplest tools can have the most transformative impact. Host: Alex, thank you for breaking this down for us. It’s a powerful reminder that the next big idea might just be brewing in a spreadsheet on an employee's laptop. Expert: My pleasure, Anna. Host: And thank you to our audience for tuning into A.I.S. Insights. Join us next time as we uncover more valuable knowledge for your business.
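Since the episode stresses that this revolutionary system "started as a spreadsheet", a minimal sketch of spreadsheet-style footprint accounting may help make that concrete. Every product, material, and emission factor below is invented for illustration; the study does not publish ChemCo's data model.

```python
# Illustrative sketch of spreadsheet-style product footprint accounting,
# in the spirit of ChemCo's EMIS. All materials and figures are made up.
from collections import defaultdict

# One row per (product, material): kg of material used and its CO2e factor.
bill_of_materials = [
    # product, material, kg_used, kg_CO2e_per_kg
    ("Cleaner A", "surfactant X", 0.30, 2.1),
    ("Cleaner A", "water",        0.65, 0.0),
    ("Cleaner B", "surfactant Y", 0.25, 4.8),
    ("Cleaner B", "water",        0.70, 0.0),
]

footprint = defaultdict(float)
for product, material, kg, factor in bill_of_materials:
    footprint[product] += kg * factor  # kg CO2e contributed by this material

for product, co2e in sorted(footprint.items()):
    print(f"{product}: {co2e:.2f} kg CO2e per unit")
```

The point of the sketch is the episode's lesson about simplicity: a flat table anyone can read and extend is enough to make environmental impact comparable across products.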
Environmental Management Information System (EMIS), Shadow IT, Corporate Sustainability, Eco-sustainability, Case Study, Strategic Value, Supply Chain Transparency
MIS Quarterly Executive (2025)
Exploring the Agentic Metaverse's Potential for Transforming Cybersecurity Workforce Development
Ersin Dincelli, Haadi Jafarian
This study explores how an 'agentic metaverse'—an immersive virtual world powered by intelligent AI agents—can be used for cybersecurity training. The researchers presented an AI-driven metaverse prototype to 53 cybersecurity professionals to gather qualitative feedback on its potential for transforming workforce development.
Problem
Traditional cybersecurity training methods, such as classroom instruction and static online courses, are struggling to keep up with the fast-evolving threat landscape and high demand for skilled professionals. These conventional approaches often lack the realism and adaptivity needed to prepare individuals for the complex, high-pressure situations they face in the real world, contributing to a persistent skills gap.
Outcome
- The concept of an AI-driven agentic metaverse for training was met with strong enthusiasm, with 92% of professionals believing it would be effective for professional training. - The study identified five core implementation challenges: significant infrastructure demands, the complexity of designing realistic multi-agent scenarios, security and privacy, governance of social dynamics, and change management for user adoption. - Six practical recommendations are provided for organizations to guide implementation, focusing on building a scalable infrastructure, developing realistic training scenarios, and embedding security, privacy, and safety by design.

Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we're diving into a fascinating new study titled "Exploring the Agentic Metaverse's Potential for Transforming Cybersecurity Workforce Development." With me is our expert analyst, Alex Ian Sutherland. Alex, welcome. Expert: Great to be here, Anna. Host: This study sounds like it’s straight out of science fiction. Can you break it down for us? What exactly is an ‘agentic metaverse’ and how does it relate to cybersecurity training? Expert: Absolutely. Think of it as a super-smart, immersive virtual world. The 'metaverse' part is the 3D, interactive environment, like a sophisticated simulation. The 'agentic' part means it's populated by intelligent AI agents that can think, adapt, and act on their own to create dynamic training scenarios. Host: So, we're talking about a virtual reality training ground run by AI. Why is this needed? What's wrong with how we train cybersecurity professionals right now? Expert: That’s the core of the problem the study addresses. The cyber threat landscape is evolving at an incredible pace. Traditional methods, like classroom lectures or static online courses, just can't keep up. Host: They’re too slow? Expert: Exactly. They lack realism and the ability to adapt. Real cyber attacks are high-pressure, collaborative, and unpredictable. A multiple-choice quiz doesn’t prepare you for that. This contributes to a massive global skills gap and high burnout rates among professionals. We need a way to train for the real world, in a safe environment. Host: So how did the researchers actually test this idea of an agentic metaverse? Expert: They built a functional prototype. It was an AI-driven, 3D environment that simulated cybersecurity incidents. They then presented this prototype to a group of 53 experienced cybersecurity professionals to get their direct feedback. 
Host: They let the experts kick the tires, so to speak. Expert: Precisely. The professionals could see firsthand how AI agents could play the role of attackers, colleagues, or even mentors, creating quests and scenarios that adapt in real-time based on the trainee's actions. It makes abstract threats feel tangible and urgent. Host: And what was the verdict from these professionals? Were they impressed? Expert: The response was overwhelmingly positive. A massive 92% of them believed this approach would be effective for professional training. They highlighted how engaging and realistic the scenarios felt, calling it a "great learning tool." Host: That’s a strong endorsement. But I imagine it’s not all smooth sailing. What are the hurdles to actually implementing this in a business? Expert: You're right. The enthusiasm was matched with a healthy dose of pragmatism. The study identified five core challenges for businesses to consider. Host: And what are they? Expert: First, infrastructure. Running a persistent, immersive 3D world with multiple AIs is computationally expensive. Second is scenario design. Creating AI-driven narratives that are both realistic and effective for learning is incredibly complex. Host: That makes sense. It's not just programming; it's like directing an intelligent, interactive movie. Expert: Exactly. The other key challenges were ensuring security and privacy within the training environment itself, managing the social dynamics in an immersive world, and finally, the big one: change management and user adoption. There's a learning curve, especially for employees who aren't gamers. Host: This is the crucial question for our listeners, Alex. Given those challenges, why should a business leader care? What are the practical takeaways here? Expert: This is where the study provides a clear roadmap. The biggest takeaway is that this technology can create a hyper-realistic, safe space for your teams to practice against advanced threats. 
It's like a flight simulator for cyber defenders. Host: So it moves training from theory to practice. Expert: It’s a complete shift. The AI agents can simulate anything from a phishing attack to a nation-state adversary, adapting their tactics based on your team's response. This allows you to identify skills gaps proactively and build real muscle memory for crisis situations. Host: What's the first step for a company that finds this interesting? Expert: The study recommends starting with small, focused pilot programs. Don't try to build a massive corporate metaverse overnight. Target a specific, high-priority training need, like incident response for a junior analyst team. Measure the results, prove the value, and then scale. Host: And it’s crucial to involve more than just the IT department, right? Expert: Absolutely. This has to be a cross-functional effort. You need your cybersecurity experts, your AI developers, your instructional designers from HR, and legal to think about privacy from day one. It's about building a scalable, secure, and truly effective training ecosystem. The payoff is a more resilient and adaptive workforce. Host: A fascinating look into the future of professional development. So, to sum it up: traditional cybersecurity training is falling behind. The 'agentic metaverse' offers a dynamic, AI-powered solution that’s highly realistic and engaging. While significant challenges in infrastructure and design exist, the potential to effectively close the skills gap is immense. Host: Alex, thank you so much for breaking this down for us. Expert: My pleasure, Anna. Host: And thank you for tuning in to A.I.S. Insights. We’ll see you next time.
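The episode describes AI agents that "adapt their tactics based on your team's response". One simple way to picture that adaptivity is a difficulty controller that escalates or de-escalates scenarios based on trainee performance. This is a deliberately tiny sketch; the study's prototype is a 3D environment driven by far richer AI agents, and the scenario names and thresholds here are invented.

```python
# Minimal sketch of an adaptive training agent: raise or lower scenario
# difficulty based on the trainee's recent performance. Purely illustrative;
# the study's prototype is far richer (immersive 3D world, multiple AI agents).
class AdaptiveScenarioAgent:
    LEVELS = ["phishing basics", "lateral movement", "ransomware response",
              "nation-state APT"]

    def __init__(self):
        self.level = 0   # start with the easiest scenario
        self.streak = 0  # consecutive successes at the current level

    def next_scenario(self, trainee_succeeded: bool) -> str:
        if trainee_succeeded:
            self.streak += 1
            if self.streak >= 2 and self.level < len(self.LEVELS) - 1:
                self.level += 1   # trainee is ready: escalate the threat
                self.streak = 0
        else:
            self.streak = 0
            if self.level > 0:
                self.level -= 1   # struggling: step back and rebuild skills
        return self.LEVELS[self.level]

agent = AdaptiveScenarioAgent()
for outcome in [True, True, True, False, True]:
    print(agent.next_scenario(outcome))
```

Even this toy loop shows why scenario design is listed among the core challenges: the pedagogy lives in the adaptation rules, not just in the simulation graphics.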
Agentic Metaverse, Cybersecurity Training, Workforce Development, AI Agents, Immersive Learning, Virtual Reality, Training Simulation
MIS Quarterly Executive (2025)
Possible, Probable and Preferable Futures for Integrating Artificial Intelligence into Talent Acquisition
Laura Bayor, Christoph Weinert, Tina Ilek, Christian Maier, Tim Weitzel
This study explores the integration of Artificial Intelligence (AI) into the talent acquisition (TA) process to guide organizations toward a better future of work. Using a Delphi study with C-level TA experts, the research identifies, evaluates, and categorizes AI opportunities and challenges into possible, probable, and preferable futures, offering actionable recommendations.
Problem
Acquiring skilled employees is a major challenge for businesses, and traditional talent acquisition processes are often labor-intensive and inefficient. While AI offers a solution, many organizations are uncertain about how to effectively integrate it, facing the risk of falling behind competitors if they fail to adopt the right strategies.
Outcome
- The study identifies three primary business goals for integrating AI into talent acquisition: finding the best-fit candidates, making HR tasks more efficient, and attracting new applicants. - Key preferable AI opportunities include automated interview scheduling, AI-assisted applicant ranking, identifying and reaching out to passive candidates ('cold talent'), and optimizing job posting content for better reach and diversity. - Significant challenges that organizations must mitigate include data privacy and security issues, employee and stakeholder distrust of AI, technical integration hurdles, potential for bias in AI systems, and ethical concerns. - The paper recommends immediate actions such as implementing AI recommendation agents and chatbots, and future actions like standardizing internal data, ensuring AI transparency, and establishing clear lines of accountability for AI-driven hiring decisions.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we're diving into the world of hiring and recruitment. Finding the right talent is more competitive than ever, and many are looking to artificial intelligence for an edge. Host: To help us understand this, we’re joined by our expert analyst, Alex Ian Sutherland. Alex, you’ve been looking at a new study on this topic. Expert: That's right, Anna. It’s titled "Possible, Probable and Preferable Futures for Integrating Artificial Intelligence into Talent Acquisition." Host: That's a mouthful! In simple terms, what's it about? Expert: It’s essentially a strategic guide for businesses. It explores how to thoughtfully integrate AI into the talent acquisition process to build a better, more effective future of work. Host: Let’s start with the big picture. What is the core business problem this study is trying to solve? Expert: The problem is twofold. First, acquiring skilled employees is a massive challenge. Traditional hiring is often slow, manual, and incredibly labor-intensive. Recruiters are overwhelmed. Host: I think many of our listeners can relate to that. What’s the second part? Expert: The second part is that while AI seems like the obvious solution, most organizations don't know where to start or what to prioritize. The study highlights that 76% of HR leaders believe their company will fall behind the competition if they don't adopt AI quickly. The risk isn't just about failing to adopt, but failing to adopt the *right* strategies. Host: So it's about being smart with AI, not just using it for the sake of it. How did the researchers figure out what those smart strategies are? Expert: They used a fascinating method called a Delphi study. Host: Can you break that down for us? Expert: Of course. 
They brought together a panel of C-level executives—real experts who make strategic hiring decisions every day. Through several rounds of structured, anonymous surveys, they identified and ranked the most critical AI opportunities and challenges. This process builds a strong consensus on what’s just hype versus what is actually feasible and beneficial right now. Host: A consensus from the experts. I like that. So what were the key findings? What are the most promising opportunities for AI in hiring? Expert: The study calls them "preferable" opportunities. Four really stand out. First, automated interview scheduling, which frees up a huge amount of administrative time. Expert: Second is AI-assisted applicant ranking. This helps recruiters quickly identify the most promising candidates from a large pool, letting them focus their energy on the best fits. Host: So it helps them find the needle in the haystack. What else? Expert: Third, identifying and reaching out to what the study calls 'cold talent.' These are passive candidates—people who aren't actively job hunting but are perfect for a role. AI can be great at finding them. Expert: And finally, optimizing the content of job postings. AI can help craft descriptions that attract a more diverse and qualified range of applicants. Host: Those are some powerful applications. But with AI, there are always challenges. What did the experts identify as the biggest hurdles? Expert: The big three were, first, data privacy and security—which is non-negotiable. Second, the potential for bias in AI systems; we have to be careful not to just automate past mistakes. Expert: And the third, which is more of a human factor, is employee and stakeholder distrust. If your team doesn't trust the tools, they won't use them effectively, no matter how powerful they are. Host: That brings us to the most important question for our audience: why does this matter for my business? How do we turn these findings into action? 
Expert: This is where the study becomes a real playbook. It recommends framing your AI strategy around one of three primary business goals. Are you trying to find the *best-fit* candidates, make your HR tasks more *efficient*, or simply *attract more* applicants? Host: Okay, so let's take one. If my goal is to make my HR team more efficient, what's a concrete first step I can take based on this study? Expert: For efficiency, the immediate recommendation is to implement chatbots and automated support systems. A chatbot can handle routine applicant questions 24/7, and an AI scheduler can handle the back-and-forth of booking interviews. This frees up your human team for high-value work, like building relationships with top candidates. Host: That’s a clear, immediate action. What if my goal is finding that perfect 'best-fit' candidate? Expert: Then you should look at implementing AI recommendation agents. These tools can analyze resumes and internal data to suggest matching jobs to applicants or even recommend career paths to your current employees, helping with internal mobility. Host: And what about the long-term view? What should businesses be planning for over the next few years? Expert: Looking ahead, the focus must be on building a strong foundation. This means standardizing your internal data so the AI has clean, reliable information to learn from. Expert: It also means prioritizing transparency and accountability. You need to be able to explain why an AI made a certain recommendation, and you must have clear lines of responsibility for AI-driven hiring decisions. Building that trust is key to long-term success. Host: This has been incredibly clear, Alex. So, to summarize for our listeners: successfully using AI in hiring requires a deliberate strategy. Host: It starts with defining a clear business goal—whether it's efficiency, quality of hire, or volume of applicants. 
Host: From there, you can implement immediate tools like chatbots and schedulers, while building a long-term foundation based on good data, transparency, and accountability. Host: Alex Ian Sutherland, thank you for translating this complex topic into such actionable insights. Expert: My pleasure, Anna. Host: And thank you for tuning in to A.I.S. Insights. Join us next time as we continue to explore the future of business and technology.
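To make the "AI-assisted applicant ranking" opportunity discussed in this episode more concrete, here is a naive text-similarity sketch. It is not the approach of any vendor or of the study itself, and real ranking systems require exactly the bias auditing, transparency, and accountability measures the experts call for; this only illustrates the scoring step.

```python
# Naive sketch of AI-assisted applicant ranking: score resumes by textual
# similarity to the job posting. Real systems are far more sophisticated
# (and must be audited for bias); this only illustrates the ranking step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_posting = "data engineer python sql cloud pipelines"
resumes = {
    "applicant_a": "python sql data pipelines cloud experience",
    "applicant_b": "graphic design branding illustration",
    "applicant_c": "java backend services sql",
}

# Vectorize the posting and all resumes in one shared TF-IDF space.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([job_posting] + list(resumes.values()))

# Similarity of each resume (rows 1..n) to the posting (row 0).
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

ranking = sorted(zip(resumes, scores), key=lambda kv: kv[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.2f}")
```

A sketch like this also makes the study's warning tangible: whatever signal the model keys on, including past hiring patterns, is what gets amplified, which is why human review of the ranked shortlist remains essential.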
Artificial Intelligence, Talent Acquisition, Human Resources, Recruitment, Delphi Study, Future of Work, Strategic HR Management
Proceedings of the 59th Hawaii International Conference on System Sciences (2026)
Discovering the Impact of Regulation Changes on Processes: Findings from a Process Science Study in Finance
Antonia Wurzer, Sophie Hartl, Sandro Franzoi, Jan vom Brocke
This study investigates how regulatory changes, once embedded in a company's information systems, affect the dynamics of business processes. Using digital trace data from a European financial institution's trade order process combined with qualitative interviews, the researchers identified patterns between the implementation of new regulations and changes in process performance indicators.
Problem
In highly regulated industries like finance, organizations must constantly adapt their operations to evolving external regulations. However, there is little understanding of the dynamic, real-world effects that implementing these regulatory changes within IT systems has on the execution and performance of business processes over time.
Outcome
- Implementing regulatory changes in IT systems dynamically affects business processes, causing performance indicators to shift immediately or with a time delay. - Contextual factors, such as employee experience and the quality of training, significantly shape how processes adapt; insufficient training after a change can lead to more errors, process loops, and violations. - Different types of regulations (e.g., content-based vs. function-based) produce distinct impacts, with some streamlining processes and others increasing rework and complexity for employees. - The study highlights the need for businesses to move beyond a static view of compliance and proactively manage the dynamic interplay between regulation, system design, and user behavior.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a fascinating study titled "Discovering the Impact of Regulation Changes on Processes: Findings from a Process Science Study in Finance." Host: In short, it explores what really happens to a company's day-to-day operations after a new regulation is coded into its IT systems. With me to break it down is our analyst, Alex Ian Sutherland. Alex, welcome. Expert: Great to be here, Anna. Host: So, let's start with the big picture. Businesses in fields like finance are constantly dealing with new rules. What's the specific problem this study decided to tackle? Expert: The problem is that most companies treat compliance as a finish line. A new regulation comes out, they update their software, and they consider the job done. But they have very little visibility into what happens next. How does that change *actually* affect employees? Does it make their work smoother or more complicated? Does it create hidden risks or inefficiencies? Expert: This study addresses that gap. It looks at the dynamic, real-world ripple effects that these system changes have on business processes over time, which is something organizations have struggled to understand. Host: So it’s about the unintended consequences. How did the researchers go about measuring these ripples? Expert: They used a really clever dual approach. First, they analyzed what's called digital trace data. Think of it as the digital footprint employees leave behind when doing their jobs. They analyzed nearly 17,000 trade order processes from a European financial institution over six months. Expert: But data alone doesn't tell the whole story. So, they combined that quantitative data with qualitative insights—talking to the actual employees, the process owners and business analysts, to understand the context behind the numbers. This let them see not just *what* was happening, but *why*. 
Host: That combination of data and human insight sounds powerful. What were some of the key findings? Expert: There were three big ones. First, the impact of a change isn't always immediate. Sometimes a system update causes a sudden spike in problems, but other times the negative effects are delayed and pop up weeks later. It's not a simple cause-and-effect. Host: And the second finding? Expert: This one is crucial: the human factor matters immensely. The study found that things like employee experience and, most importantly, the quality of training had a huge impact on how processes adapted. Host: Can you give us an example? Expert: Absolutely. After one regulatory change related to ESG reporting was implemented, the data showed a sharp increase in the number of steps employees took to complete a task, and more process violations. The interviews revealed why: there was no structured training for the change. Employees were confused by a subtly altered interface, which led them to make more errors, repeat steps, and get frustrated. Host: So a small system update, without proper support, can actually hurt productivity. What was the final key finding? Expert: That not all regulatory changes are created equal. The study found that different types of regulations create very different outcomes. A change that automated the generation of a required document actually streamlined the process, making it leaner with fewer reworks. Expert: But in contrast, a change that added new manual tick-boxes for users to fill out increased complexity and rework, because employees found themselves having to go back and complete the new fields repeatedly. Host: This is incredibly practical. Let's move to the most important question for our listeners: why does this matter for their business? What are the key takeaways? Expert: The number one takeaway is to move beyond a static view of compliance. Implementing a change in your IT system isn't the end of the process; it's the beginning. 
Leaders need to proactively monitor how these changes are affecting workflows on the ground, and this study shows they can use their own system data to do it. Host: So, use your data to see the real impact. What's the next takeaway? Expert: Invest in change management, especially training. You can spend millions on a compliant system, but if you don't prepare your people, you could actually lower efficiency and increase errors. The study provides clear evidence that a lack of training directly leads to process loops and mistakes. A simple, proactive training plan is not a cost—it's an investment against future risk and inefficiency. Host: That’s a powerful point. And the final piece of advice? Expert: Understand the nature of the change before you implement it. Ask your teams: is this update automating a task for our employees, or is it adding a new manual burden? Answering that simple question can help you predict whether the change will be a helpful streamline or a frustrating new bottleneck, and you can plan your support and training accordingly. Host: Fantastic insights. So, to summarize for our listeners: compliance is a dynamic, ongoing process, not a one-time fix. The human factor, especially training, is absolutely critical to success. And finally, understanding the type of regulatory change can help you predict its true impact on your business. Host: Alex Ian Sutherland, thank you for making this complex study so clear and actionable for us. Expert: My pleasure, Anna. Host: And thank you for listening to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more valuable research for your business.
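The episode's advice to "use your own system data" can be sketched directly: the digital trace data the researchers analyzed is essentially an event log, and indicators like cycle time and rework can be computed from it with a few lines of code. The log entries below are invented; the study worked with nearly 17,000 real trade order cases.

```python
# Sketch of mining performance indicators from digital trace data (event
# logs), in the spirit of the study's analysis. Log entries are invented.
from datetime import datetime
from collections import defaultdict

# Each event: (case_id, activity, timestamp) -- the minimal process log.
log = [
    ("order-1", "create",   "2024-01-02 09:00"),
    ("order-1", "validate", "2024-01-02 09:10"),
    ("order-1", "execute",  "2024-01-02 09:30"),
    ("order-2", "create",   "2024-01-02 10:00"),
    ("order-2", "validate", "2024-01-02 10:20"),
    ("order-2", "validate", "2024-01-02 10:50"),  # rework: repeated step
    ("order-2", "execute",  "2024-01-02 11:00"),
]

cases = defaultdict(list)
for case_id, activity, ts in log:
    cases[case_id].append((datetime.fromisoformat(ts), activity))

for case_id, events in cases.items():
    events.sort()
    cycle_minutes = (events[-1][0] - events[0][0]).total_seconds() / 60
    activities = [a for _, a in events]
    reworks = len(activities) - len(set(activities))  # repeated activities
    print(f"{case_id}: cycle time {cycle_minutes:.0f} min, reworks {reworks}")
```

Tracking indicators like these before and after a regulatory change is implemented in the system is exactly how delayed effects, error spikes, and new process loops become visible.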
Process Science, Regulation, Change, Business Processes, Digital Trace Data, Dynamics
Communications of the Association for Information Systems (2025)
Implementing AI into ERP Software
Siar Sarferaz
This study investigates how to systematically integrate Artificial Intelligence (AI) into complex Enterprise Resource Planning (ERP) systems. Through an analysis of real-world use cases, the author identifies key challenges and proposes a comprehensive DevOps (Development and Operations) framework to standardize and streamline the entire lifecycle of AI applications within an ERP environment.
Problem
While integrating AI into ERP software offers immense potential for automation and optimization, organizations lack a systematic approach to do so. This absence of a standardized framework leads to inconsistent, inefficient, and costly implementations, creating significant barriers to adopting AI capabilities at scale within enterprise systems.
Outcome
- Identified 20 specific, recurring gaps in the development and operation of AI applications within ERP systems, including complex setup, heterogeneous development, and insufficient monitoring. - Developed a comprehensive DevOps framework that standardizes the entire AI lifecycle into six stages: Create, Check, Configure, Train, Deploy, and Monitor. - The proposed framework provides a systematic, self-service approach for business users to manage AI models, reducing the reliance on specialized technical teams and lowering the total cost of ownership. - A quantitative evaluation across 10 real-world AI scenarios demonstrated that the framework reduced processing time by 27%, increased cost savings by 17%, and improved outcome quality by 15%.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a fascinating study titled "Implementing AI into ERP Software," which looks at how businesses can systematically integrate Artificial Intelligence into their core operational systems.
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, great to have you.
Expert: Thanks for having me, Anna.
Host: Let's start with the big picture. ERP systems are the digital backbone of so many companies, managing everything from finance to supply chains. And everyone is talking about AI. It seems like a perfect match, but this study suggests it's not that simple. What's the real-world problem here?
Expert: Exactly. The potential is massive, but the execution is often chaotic. The core problem is that most organizations lack a standardized playbook for embedding AI into these incredibly complex ERP systems. This leads to implementations that are inconsistent, inefficient, and very costly.
Host: Can you give us a concrete example of that chaos?
Expert: Absolutely. The study identified 20 recurring problems, or 'gaps'. For instance, one gap they called 'Heterogeneous Development'. They found cases where a company's supply chain team would build a demand forecasting model using one set of AI tools, while the sales team built a similar model for price optimization using a completely different, incompatible set of tools.
Host: So, they're essentially reinventing the wheel in different departments, driving up costs and effort.
Expert: Precisely. Another major issue is the 'Need for AI Expertise'. Business users are told a model is, say, 85% accurate, but they have no way to know if that's good enough for their specific inventory decisions. They become completely dependent on expensive technical teams for every step.
Host: So how did the research approach solving such a complex and widespread problem?
Expert: Instead of just theorizing, the author analyzed numerous real-world AI use cases within a major ERP environment. They systematically documented what was going wrong in practice—all those gaps we mentioned—and used that direct evidence to design and build a practical framework to fix them.
Host: A solution born from real-world challenges. I like that. So what were the key findings? What did this new framework look like?
Expert: The main outcome is a comprehensive DevOps framework that standardizes the entire lifecycle of an AI model into six clear stages.
Host: Okay, what are those stages?
Expert: They are: Create, Check, Configure, Train, Deploy, and Monitor. Think of it as a universal assembly line for AI applications. The 'Create' stage is for development, but the 'Check' stage is crucial—it automatically verifies if you even have the right quality and amount of data before you start.
Host: That sounds like it would prevent a lot of failed projects right from the beginning.
Expert: It does. And the later stages, like 'Train' and 'Deploy', are designed as self-service tools. This empowers a business user, not just a data scientist, to retrain a model or roll it back to a previous version with a few clicks. It dramatically reduces the reliance on specialized teams.
Host: This is the part our listeners are waiting for, Alex. Why does this framework matter for business? What are the tangible benefits of adopting this kind of systematic approach?
Expert: This is where it gets really compelling. The study evaluated the framework's performance across 10 real-world AI scenarios, and the results were significant. They saw a 27% reduction in processing time.
Host: So you get your AI-powered insights almost a third faster.
Expert: Exactly. They also measured a 17% increase in cost savings. By eliminating that duplicated effort and streamlining the process, the total cost of ownership for these AI features drops.
Host: A direct impact on the bottom line. And what about the quality of the results?
Expert: That improved as well. They found a 15% improvement in outcome quality. This means the AI is making better predictions and smarter recommendations, which leads to better business decisions—whether that's optimizing inventory, predicting delivery delays, or detecting fraud.
Host: So it's faster, cheaper, and better. It sounds like this framework is what turns AI from a series of complex science experiments into a scalable, reliable business capability.
Expert: That's the perfect way to put it. It provides the governance and standardization needed to move from a few one-off AI projects to an enterprise-wide strategy where AI is truly integrated into the core of the business.
Host: Fantastic insights, Alex. So, to summarize for our listeners: integrating AI into ERP systems has been challenging and chaotic. This study identified the key gaps and proposed a six-stage framework—Create, Check, Configure, Train, Deploy, and Monitor—to standardize the process. The business impact is clear: significant gains in speed, cost savings, and the quality of outcomes.
Host: Alex Ian Sutherland, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights — powered by Living Knowledge.
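As a purely illustrative aside (not taken from the paper), the six-stage lifecycle discussed in this episode can be sketched as a gated pipeline, where the Check stage blocks training when the data is insufficient. The stage names come from the study; the classes, functions, and thresholds below are assumptions for demonstration only.

```python
from enum import Enum

class Stage(Enum):
    CREATE = 1
    CHECK = 2
    CONFIGURE = 3
    TRAIN = 4
    DEPLOY = 5
    MONITOR = 6

def check_data(rows, min_rows=100):
    # 'Check' gate (illustrative): enough rows, and no missing values anywhere
    return len(rows) >= min_rows and all(
        None not in row.values() for row in rows
    )

def run_lifecycle(rows):
    # Walk the stages in order; stop before training if the data check fails,
    # catching doomed projects early, as the 'Check' stage is meant to do
    completed = [Stage.CREATE]
    if not check_data(rows):
        return completed
    completed += [Stage.CHECK, Stage.CONFIGURE, Stage.TRAIN,
                  Stage.DEPLOY, Stage.MONITOR]
    return completed

sample = [{"demand": i, "price": 10 + i} for i in range(200)]
print([s.name for s in run_lifecycle(sample)])
# prints all six stage names, CREATE through MONITOR
```

The point of the sketch is the early exit: a standardized pipeline makes the data-quality gate explicit instead of leaving it to each team's ad-hoc process.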
Enterprise Resource Planning, Artificial Intelligence, DevOps, Software Integration, AI Development, AI Operations, Enterprise AI
International Conference on Wirtschaftsinformatik (2025)
Trust Me, I'm a Tax Advisor: Influencing Factors for Adopting Generative AI Assistants in Tax Law
Ben Möllmann, Leonardo Banh, Jan Laufer, and Gero Strobel
This study explores the critical role of user trust in the adoption of Generative AI assistants within the specialized domain of tax law. Employing a mixed-methods approach, researchers conducted quantitative questionnaires and qualitative interviews with legal experts using two different AI prototypes. The goal was to identify which design factors are most effective at building trust and encouraging use.
Problem
While Generative AI can assist in fields like tax law that require up-to-date research, its adoption is hindered by issues like lack of transparency, potential for bias, and inaccurate outputs (hallucinations). These problems undermine user trust, which is essential for collaboration in high-stakes professional settings where accuracy is paramount.
Outcome
- Transparency, such as providing clear source citations, was a key factor in building user trust.
- Human-like features (anthropomorphism), like a conversational greeting and layout, positively influenced user perception and trust.
- Compliance with social and ethical norms, including being upfront about the AI's limitations, was also found to enhance trustworthiness.
- A higher level of trust in the AI assistant directly leads to an increased intention among professionals to use the tool in their work.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating new study called “Trust Me, I'm a Tax Advisor: Influencing Factors for Adopting Generative AI Assistants in Tax Law.”
Host: It explores a huge question: In a specialized, high-stakes field like tax law, what makes a professional actually trust an AI assistant? And how can we design AI that people will actually use? With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, let's start with the big picture. We hear a lot about AI's potential, but this study highlights a major roadblock, especially in professional fields. What's the core problem they're addressing?
Expert: The core problem is trust. Generative AI can be incredibly powerful for tasks like legal research, which requires sifting through constantly changing laws and rulings. But these tools can also make mistakes, invent sources—what we call 'hallucinations'—and their reasoning can be a total 'black box.'
Host: And in tax law, a mistake isn't just a typo.
Expert: Exactly. As the study points out, misplaced trust in an AI’s output can lead to severe financial penalties for a client, or even malpractice litigation for the attorney. When the stakes are that high, you're not going to use a tool you don't fundamentally trust. That lack of trust is the biggest barrier to adoption.
Host: So how did the researchers measure something as subjective as trust? What was their approach?
Expert: They used a really clever mixed-methods approach. They built two different prototypes of a Generative AI tax assistant. The first was a basic, no-frills tool. The second prototype was designed specifically to build trust.
Host: How so? What was different about it?
Expert: It had features we'll talk about in a moment. They then had a group of legal experts perform real-world tax research tasks using both prototypes. Afterwards, the researchers gathered feedback through detailed questionnaires and in-depth interviews to see which version the experts trusted more, and why.
Host: A direct head-to-head comparison. I love that. So, what were the key findings? What are the secret ingredients for building a trustworthy AI?
Expert: The results were incredibly clear, and they came down to three main factors. First, transparency was paramount. The prototype that clearly cited its sources for every piece of information was trusted far more.
Host: So users could check the AI's work, essentially.
Expert: Precisely. One expert in the study was quoted as saying the system was "definitely more trustworthy, precisely because the sources have been specified." It gives the user a sense of control and verification.
Host: That makes perfect sense. What was the second factor?
Expert: The second was what the study calls 'anthropomorphism'—basically, making the AI feel more human-like. The more trusted prototype had a conversational greeting and a familiar chat layout. Experts said it made them feel "more familiar and better supported."
Host: It’s interesting that a simple design choice can have such a big impact on trust.
Expert: It is. And the third factor was just as fascinating: the AI’s honesty about its own limitations.
Host: You mean the AI admitting what it *can't* do?
Expert: Yes. The trusted prototype included an introduction that mentioned its capabilities and its limits. The experts saw this not as a weakness, but as a sign of reliability. Being upfront about its boundaries actually made the AI seem more trustworthy.
Host: Transparency, a human touch, and a bit of humility. It sounds like a recipe for a good human colleague, not just an AI. Alex, let's get to the bottom line. What does this all mean for business leaders listening right now?
Expert: This is the most important part. For any business implementing AI, especially for expert users, this study provides a clear roadmap. The biggest takeaway is that you have to design for trust, not just for function.
Host: What does that look like in practice?
Expert: It means for any AI that provides information—whether to your legal team, your financial analysts, or your engineers—it must be able to show its work. Building in transparent, clickable source citations isn't an optional feature; it's essential for adoption.
Host: Okay, so transparency is job one. What else?
Expert: Don't underestimate the user interface. A sterile, purely functional tool might be technically perfect, but a more conversational and intuitive design can significantly lower the barrier to entry and make users more comfortable. User experience directly impacts trust.
Host: And that third point about limitations seems critical for managing expectations.
Expert: Absolutely. Be upfront with your teams about what your new AI tool is good at and where it might struggle. Marketing might want to sell it as a magic bullet, but for actual adoption, managing expectations and being honest about limitations builds the long-term trust you need for the tool to succeed.
Host: So, to recap for our listeners: if you're rolling out AI tools, the key to getting your teams to actually use them is building trust. And you do that through transparency, like citing sources; a thoughtful, human-centric design; and being honest about the AI’s limitations.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights. We’ll see you next time.
International Conference on Wirtschaftsinformatik (2025)
Education and Migration of Entrepreneurial and Technical Skill Profiles of German University Graduates
David Blomeyer and Sebastian Köffer
This study examines the supply of entrepreneurial and technical talent from German universities and analyzes their migration patterns after graduation. Using LinkedIn alumni data for 43 universities, the research identifies key locations for talent production and evaluates how effectively different cities and federal states retain or attract these skilled workers.
Problem
Amidst a growing demand for skilled workers, particularly for startups, companies and policymakers lack clear data on talent distribution and mobility in Germany. This information gap makes it difficult to devise effective recruitment strategies, choose business locations, and create policies that foster regional talent retention and economic growth.
Outcome
- Universities in major cities, especially TU München and LMU München, produce the highest number of graduates with entrepreneurial and technical skills.
- Talent retention varies significantly by location; universities in major metropolitan areas like Berlin, Munich, and Hamburg are most successful at keeping their graduates locally, with FU Berlin retaining 68.8% of its entrepreneurial alumni.
- The tech hotspots of North Rhine-Westphalia (NRW), Bavaria, and Berlin retain an above-average number of their own graduates while also attracting a large share of talent from other regions.
- Bavaria is strong in both educating and attracting talent, whereas NRW, the largest producer of talent, also loses a significant number of graduates to other hotspots.
- The analysis reveals that hotspot regions are generally better at retaining entrepreneurial profiles than technical profiles, highlighting the influence of local startup ecosystems on talent mobility.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. In today's competitive landscape, finding the right talent can make or break a business. But where do you find them? Today, we're diving into a fascinating study titled "Education and Migration of Entrepreneurial and Technical Skill Profiles of German University Graduates."
Host: In short, it examines where Germany's top entrepreneurial and tech talent comes from, and more importantly, where it goes after graduation. With me to break it all down is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: So, Alex, let's start with the big picture. What's the real-world problem this study is trying to solve?
Expert: The problem is a significant information gap. Germany has a huge demand for skilled workers, especially in STEM fields—we're talking a gap of over 300,000 specialists. Startups, in particular, need this talent to scale. But companies and even regional governments don't have clear data on where these graduates are concentrated and how they move around the country.
Host: So they’re flying blind when it comes to recruitment or deciding where to set up a new office?
Expert: Exactly. Without this data, it's hard to build effective recruitment strategies or create policies that help a region hold on to the talent it educates. This study gives us a map of Germany's brain circulation for the first time.
Host: How did the researchers create this map? What was their approach?
Expert: It was quite innovative. They used a massive and publicly available dataset: LinkedIn alumni pages. They analyzed over 2.4 million alumni profiles from 43 major German universities.
Host: And how did they identify the specific talent they were looking for?
Expert: They created two key profiles. First, the 'Entrepreneurial Profile,' using keywords like Founder, Startup, or Business Development. Second, the 'Technical Profile,' with keywords like IT, Engineering, or Digital. Then, they tracked the current location of these graduates to see who stays, who leaves, and where they go.
Host: A digital breadcrumb trail for talent. So, what were the key findings? Where is the talent coming from?
Expert: Unsurprisingly, universities in major cities are the biggest producers. The undisputed leader is Munich. The Technical University of Munich, TU München, produces the highest number of both entrepreneurial and technical graduates in the entire country.
Host: So Munich is the top talent factory. But the crucial question is, does the talent stay there?
Expert: That's where it gets interesting. The study found that talent retention varies massively. Again, the big metropolitan areas—Berlin, Munich, and Hamburg—are the most successful at keeping their graduates. Freie Universität Berlin, for example, retains nearly 69% of its entrepreneurial alumni right there in the city. That's an incredibly high rate.
Host: That is high. And what about the bigger picture, at the state level? Are there specific regions that are winning the war for talent?
Expert: Yes, the study identifies three clear hotspots: Bavaria, Berlin, and North Rhine-Westphalia, or NRW. They not only retain a high number of their own graduates, but they also act as magnets, pulling in talent from all over Germany.
Host: And are these hotspots all the same?
Expert: Not at all. Bavaria is a true powerhouse—it's strong in both educating and attracting talent. NRW is the largest producer of skilled graduates, but it also has a "brain drain" problem, losing a lot of its talent to the other two hotspots. And Berlin is a massive talent magnet, with almost half of its entrepreneurial workforce having migrated there from other states.
Host: This is all fascinating, Alex, but let's get to the bottom line. Why does this matter for the business professionals listening to our show?
Expert: This is a strategic roadmap for businesses. For recruitment, it means you can move beyond simple university rankings. This data tells you where specific talent pools are geographically concentrated. Need experienced engineers? The data points squarely to Munich. Looking for entrepreneurial thinkers? Berlin is a giant hub of attracted, not just homegrown, talent.
Host: So it helps companies focus their hiring efforts. What about for bigger decisions, like choosing a business location?
Expert: Absolutely. This study helps you understand the dynamics of a regional talent market. Bavaria offers a stable, locally-grown talent pool. Berlin is incredibly dynamic but relies on its power to attract people, which could be vulnerable to competition. A company in NRW needs to know it’s competing directly with Berlin and Munich for its best people.
Host: So it's about understanding the long-term sustainability of the local talent pipeline.
Expert: Precisely. It also has huge implications for investors and policymakers. It reveals which regions are getting the best return on their educational investments. It shows where to invest to build up a local startup ecosystem that can actually hold on to the bright minds it helps create.
Host: So, to sum it up: we now have a much clearer picture of Germany's talent landscape. Universities in big cities are the incubators, but major hotspots like Berlin and Bavaria are the magnets that ultimately attract and retain them.
Expert: That's right. It's not just about who has the best universities, but who has the best ecosystem to keep the graduates those universities produce.
Host: A crucial insight for any business looking to grow. Alex, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in. Join us next time for more on A.I.S. Insights — powered by Living Knowledge.
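As an illustrative aside, the keyword-based profiling described in this episode could be sketched as a simple classifier. The keyword sets below echo the examples mentioned (Founder, Startup, IT, Engineering, Digital); the matching logic and function names are assumptions, not the study's actual method.

```python
import re

# Keyword sets echo the examples from the episode; the matching logic is assumed
ENTREPRENEURIAL = {"founder", "startup", "entrepreneur"}
TECHNICAL = {"it", "engineering", "digital"}

def classify(headline):
    """Label an alumni headline as entrepreneurial, technical, both, or neither."""
    # Tokenize on word boundaries so short keywords like "it" don't
    # accidentally match inside longer words
    tokens = set(re.findall(r"[a-z]+", headline.lower()))
    labels = set()
    if tokens & ENTREPRENEURIAL:
        labels.add("entrepreneurial")
    if tokens & TECHNICAL:
        labels.add("technical")
    return labels

print(classify("Founder & CEO at a digital startup"))
```

A profile can carry both labels at once, which matters for the study's comparison of how well regions retain entrepreneurial versus technical graduates.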
International Conference on Wirtschaftsinformatik (2025)
There is AI in SustAInability – A Taxonomy Structuring AI For Environmental Sustainability
Feline Schnaak, Katharina Breiter, Henner Gimpel
This study develops a structured framework to organize the growing field of artificial intelligence for environmental sustainability (AIfES). Through an iterative process involving literature reviews and real-world examples, the researchers created a multi-layer taxonomy. This framework is designed to help analyze and categorize AI systems based on their context, technical setup, and usage.
Problem
Artificial intelligence is recognized as a powerful tool for promoting environmental sustainability, but the existing research and applications are fragmented and lack a cohesive structure. This disorganization makes it difficult for researchers and businesses to holistically understand, compare, and develop effective AI solutions. There is a clear need for a systematic framework to guide the analysis and deployment of AI in this critical domain.
Outcome
- The study introduces a comprehensive, multi-layer taxonomy for AI systems for environmental sustainability (AIfES).
- This taxonomy is structured into three layers: context (the sustainability challenge), AI setup (the technology and data), and usage (risks and end-users).
- It provides a systematic tool for researchers, developers, and policymakers to analyze, classify, and benchmark AI applications, enhancing transparency and understanding.
- The framework supports the responsible design and development of impactful AI solutions by highlighting key dimensions and characteristics for evaluation.
Host: Welcome to A.I.S. Insights, the podcast where we connect Living Knowledge to your business. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study titled "There is AI in SustAInability – A Taxonomy Structuring AI For Environmental Sustainability".
Host: With me is our expert analyst, Alex Ian Sutherland, who has explored this research. Alex, welcome.
Expert: Great to be here, Anna.
Host: To start, this study aims to create a structured framework for the growing field of AI for environmental sustainability. Can you set the stage for us? What's the big problem it’s trying to solve?
Expert: Absolutely. Everyone is talking about using AI to tackle climate change, but the field is incredibly fragmented. It's a collection of great ideas, but without a cohesive structure.
Host: So it's like having a lot of puzzle pieces but no picture on the box to guide you?
Expert: That's a perfect analogy. For businesses, this disorganization makes it difficult to understand the landscape, compare different AI solutions, or decide where to invest for the biggest impact. This study addresses that by creating a clear, systematic map of the territory.
Host: A map sounds incredibly useful. How did the researchers go about creating one for such a complex and fast-moving area?
Expert: They used a very practical, iterative approach. They didn't just build a theoretical model. Instead, they conducted a rigorous review of existing scientific literature and then cross-referenced those findings with dozens of real-world AI applications from innovative companies.
Expert: By moving back and forth between academic theory and real-world examples, they refined their framework over five distinct cycles to ensure it was both comprehensive and grounded in reality.
Host: And the result of that process is what they call a 'multi-layer taxonomy'. It sounds a bit technical, but I have a feeling you can simplify it for us.
Expert: Of course. The final framework is organized into three simple layers. Think of them as three essential questions you'd ask about any AI sustainability tool.
Host: I like that. What's the first question?
Expert: The first is the 'Context Layer', and it asks: What environmental problem are we solving? This identifies which of the UN's Sustainable Development Goals the AI addresses, like clean water or climate action, and the specific topic, like agriculture, energy, or pollution.
Host: Okay, so that’s the 'what'. What’s next?
Expert: The second is the 'AI Setup Layer'. This asks: How does the technology actually work? It looks at the technical foundation—the type of AI, where its data comes from, be it satellites or sensors, and how that data is accessed. It’s the nuts and bolts.
Host: The 'what' and the 'how'. That leaves the third layer.
Expert: The third is the 'Usage Layer', which asks: Who is this for, and what are the risks? This is crucial. It defines the end-users—governments, companies, or individuals—and evaluates the system's potential risks, helping to guide responsible development.
Host: This framework brings a lot of clarity. So, let’s get to the most important question for our audience: why does this matter for business leaders?
Expert: It matters because this framework is essentially a strategic toolkit. First, it provides a common language. Your tech team, sustainability officers, and marketing department can finally get on the same page.
Host: That alone sounds incredibly valuable.
Expert: It is. Second, it's a guide for design and evaluation. If you're developing a new product, you can use this structure to align your solution with a real sustainability strategy, identify technical needs, and pinpoint your target customers right from the start.
Host: So it helps businesses build better, more focused sustainable products.
Expert: Exactly. And it also helps them innovate by spotting new opportunities. By mapping existing solutions, a business can easily see where the market is crowded and, more importantly, where the gaps are. It can point the way to underexplored areas ripe for innovation.
Expert: For example, the study highlights a tool that uses computer vision on a tractor to spray herbicide only on weeds, not crops. The framework makes its value crystal clear: the context is sustainable agriculture. The setup is AI vision. The user is the farming company. It builds a powerful business case.
Host: So, this is far more than just an academic exercise. It's a practical roadmap for businesses looking to make a real, measurable impact with AI.
Host: The study tackles the fragmented world of AI for sustainability by offering a clear, three-layer framework—Context, AI Setup, and Usage—to help businesses design, evaluate, and innovate responsibly.
Host: Alex Ian Sutherland, thank you for making this complex topic so accessible.
Expert: My pleasure, Anna.
Host: And to our listeners, thank you for tuning into A.I.S. Insights — powered by Living Knowledge. Join us next time as we translate another key study into business intelligence.
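As a purely illustrative sketch (not from the paper), the three-layer classification could be captured as a simple record, using the weed-spraying tractor from the episode as the example. The field names and values are assumptions chosen to mirror the Context, AI Setup, and Usage layers.

```python
from dataclasses import dataclass

@dataclass
class AIfESProfile:
    # Context layer: which environmental problem is addressed?
    sdg_area: str
    topic: str
    # AI setup layer: how does the technology work?
    ai_type: str
    data_source: str
    # Usage layer: who uses it, and what are the risks?
    end_user: str
    risk_note: str

# The weed-spraying example from the episode, classified along all three layers
weed_sprayer = AIfESProfile(
    sdg_area="Sustainable agriculture",
    topic="Targeted herbicide spraying",
    ai_type="Computer vision",
    data_source="Tractor-mounted cameras",
    end_user="Farming company",
    risk_note="Misclassification could damage crops",
)
print(weed_sprayer.ai_type)  # prints "Computer vision"
```

Classifying every candidate solution into one shared schema like this is what makes benchmarking and gap-spotting across a portfolio possible.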
Artificial Intelligence, AI for Sustainability, Environmental Sustainability, Green IS, Taxonomy
International Conference on Wirtschaftsinformatik (2025)
Towards the Acceptance of Virtual Reality Technology for Cyclists
Sophia Elsholz, Paul Neumeyer, and Rüdiger Zarnekow
This study investigates the factors that influence cyclists' willingness to adopt virtual reality (VR) for indoor training. Using a survey of 314 recreational and competitive cyclists, the research applies an extended Technology Acceptance Model (TAM) to determine what makes VR appealing for platforms like Zwift.
Problem
While digital indoor cycling platforms exist, they lack the full immersion that VR can offer. However, it is unclear whether cyclists would actually accept and use VR technology, as its potential in sports remains largely theoretical and the specific factors driving adoption in cycling are unknown.
Outcome
- Perceived enjoyment is the single most important factor determining if a cyclist will adopt VR for training.
- Perceived usefulness, or the belief that VR will improve training performance, is also a strong predictor of acceptance.
- Surprisingly, the perceived ease of use of the VR technology did not significantly influence a cyclist's intention to use it.
- Social factors, such as the opinions of other athletes and trainers, along with a cyclist's general openness to new technology, positively contribute to their acceptance of VR.
- Both recreational and competitive cyclists showed similar levels of acceptance, indicating a broad potential market, but both groups are currently skeptical about VR's ability to improve performance.
Host: Welcome to A.I.S. Insights, the podcast where we connect Living Knowledge with real-world business strategy. I'm your host, Anna Ivy Summers.
Host: Today, we're gearing up to talk about the intersection of fitness and immersive technology. We're diving into a fascinating study called "Towards the Acceptance of Virtual Reality Technology for Cyclists."
Host: It explores what makes cyclists, both amateur and pro, willing to adopt VR for their indoor training routines. Here to break it all down for us is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: So, Alex, let's start with the big picture. People are already using platforms like Zwift for indoor cycling. What's the problem this study is trying to solve?
Expert: That's the perfect place to start. Those platforms are popular, but they're still fundamentally a 2D screen experience. The big problem is that while VR promises a much more immersive, realistic training session, its potential in sports is still largely theoretical.
Expert: Companies are hesitant to invest millions in developing VR cycling apps because they simply don't know if cyclists will actually use them. We need to understand the 'why' behind adoption before the 'what' gets built.
Host: So it’s about closing that gap between a cool idea and a viable product. How did the researchers go about figuring out what cyclists want?
Expert: They took a very methodical approach. They conducted a detailed survey with 314 cyclists, ranging from recreational riders to competitive athletes.
Expert: They used a framework called the Technology Acceptance Model, or TAM, which they extended for this specific purpose. Essentially, it's a way to measure the key psychological factors that make someone decide to use a new piece of tech.
Expert: They didn't just look at whether it's useful or easy to use. They also measured the impact of perceived enjoyment, a cyclist's general openness to new tech, and even social pressure from trainers and other athletes.
Host: And after surveying all those cyclists, what were the most surprising findings?
Expert: There were a few real eye-openers. First and foremost, the single most important factor for adoption wasn't performance gains—it was perceived enjoyment.
Host: You mean, it has to be fun? More so than effective?
Expert: Exactly. The data shows that if the experience isn't fun, cyclists won't be interested. This suggests they see VR cycling as a 'hedonic' system—one used for enjoyment—rather than a purely utilitarian training tool. Usefulness was the second biggest factor, but fun came first.
Host: That is interesting. What else stood out?
Expert: The biggest surprise was what *didn't* matter. The perceived ease of use of the VR technology had no significant direct impact on a cyclist's intention to adopt it.
Host: So, they don't mind if it's a bit complicated to set up, as long as the experience is worth it?
Expert: Precisely. They're willing to overcome a technical hurdle if the payoff in enjoyment and usefulness is there. The study also confirmed that social factors are key—what your teammates and coach think about the tech really does influence your willingness to try it.
Host: This is where it gets critical for our listeners. Alex, what does this all mean for business? What are the key takeaways for a company in the fitness tech space?
Expert: This study provides a clear roadmap. The first takeaway is: lead with fun. Your marketing, your design, your user experience—it all has to be built around creating an engaging and enjoyable world. Forget sterile lab simulations; think gamified adventures.
Host: So sell the experience, not just the specs.
Expert: Exactly. The second takeaway addresses the usefulness problem. The study found that cyclists are currently skeptical that VR can actually improve their performance. So, a business needs to explicitly educate the market.
Expert: This means developing and promoting features that offer clear performance benefits you can't get elsewhere—like real-time feedback on your pedaling technique or the ability to practice a specific, difficult segment of a real-world race course in VR.
Host: That sounds like a powerful marketing angle. You're not just riding; you're gaining a competitive edge.
Expert: It is. And the final key takeaway is to leverage the community. Since social norms are so influential, businesses should target teams, clubs, and coaches. A positive review from a respected trainer could be more valuable than a massive ad campaign. Build community features that encourage social interaction and friendly competition.
Host: Fantastic insights, Alex. So, to summarize for our business leaders: to succeed in the VR cycling market, the winning formula is to first make it fun, then prove it makes you faster, and finally, empower the community to spread the word.
Expert: You've got it. It's about balancing the enjoyment with tangible, marketable benefits.
Host: Thank you so much for breaking that down for us, Alex. It's clear that understanding the user is the first and most important lap in this race.
Host: And thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we uncover more actionable insights from the world of research.
Technology Acceptance, TAM, Cycling, Extended Reality, XR
International Conference on Wirtschaftsinformatik (2025)
Designing Change Project Monitoring Systems: Insights from the German Manufacturing Industry
Bastian Brechtelsbauer
This study details the design of a system to monitor organizational change projects, using insights from an action design research project with two large German manufacturing companies. The methodology involved developing and evaluating a prototype system, which includes a questionnaire-based survey and an interactive dashboard for data visualization and analysis.
Problem
Effectively managing organizational change is crucial for company survival, yet it is notoriously difficult to track and oversee. There is a significant research gap and lack of practical guidance on how to design information technology systems that can successfully monitor change projects to improve transparency and support decision-making for managers.
Outcome
- Developed a prototype change project monitoring system consisting of surveys and an interactive dashboard to track key indicators like change readiness, acceptance, and implementation.
- Identified four key design challenges: balancing user effort vs. insight depth, managing standardization vs. adaptability, creating a realistic understanding of data quantification, and establishing a shared vision for the tool.
- Proposed three generalized requirements for change monitoring systems: they must provide information tailored to different user groups, be usable for various types of change projects, and conserve scarce resources during organizational change.
- Outlined eight design principles to guide development, focusing on both the system's features (e.g., modularity, intuitive visualizations) and the design process (e.g., involving stakeholders, communicating a clear vision).
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating new study titled "Designing Change Project Monitoring Systems: Insights from the German Manufacturing Industry". It explores how to build better tools to keep track of major organizational change. With me today is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: So, Alex, let’s start with the big picture. We all know companies are constantly changing, but why is monitoring that change such a critical problem to solve right now?
Expert: It's a huge issue. Think about the pressures on a major industry like German manufacturing, which this study focuses on. They're dealing with digital transformation, new sustainability goals, and intense global competition. Thriving, or even just surviving, means constant adaptation.
Host: And that adaptation is managed through change projects.
Expert: Exactly. Projects like restructuring departments, adopting new technologies, or shifting the entire company culture. The problem is, these are incredibly complex and expensive, yet managers often lack a clear, real-time view of what’s actually happening on the ground. They’re trying to navigate a storm without a compass.
Host: So they’re relying on gut feeling rather than data.
Expert: For the most part, yes. There's been a real lack of practical guidance on how to design an IT system that can properly monitor these projects, track employee sentiment, and give leaders the data they need to make better decisions. This study aimed to fill that gap.
Host: How did the researchers approach such a complex problem? What was their method?
Expert: Well, this wasn't a purely theoretical exercise. The researchers took a hands-on approach. They partnered directly with two large German manufacturing companies to co-develop a prototype system from the ground up.
Host: So they built something real and tested it?
Expert: Precisely. They created a system that has two main parts. First, a series of questionnaires to regularly survey employees about the change project—things like their readiness for the change, how well they feel supported, and their overall acceptance. Second, they built an interactive dashboard that visualizes all that survey data, so managers can see trends and drill down into specific areas or departments.
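Expert: To make the survey-to-dashboard idea concrete, here is a minimal sketch of the aggregation step behind such a tool. The department names, indicator labels, and 1-to-5 scale are illustrative assumptions, not the study's actual instrument:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical employee survey responses: each answer rates one
# indicator (e.g. change readiness or acceptance) on a 1-5 scale.
responses = [
    {"department": "Assembly",  "indicator": "readiness",  "score": 4},
    {"department": "Assembly",  "indicator": "readiness",  "score": 2},
    {"department": "Assembly",  "indicator": "acceptance", "score": 5},
    {"department": "Logistics", "indicator": "readiness",  "score": 3},
    {"department": "Logistics", "indicator": "acceptance", "score": 4},
]

def aggregate(responses):
    """Average each indicator per department, as a dashboard
    drill-down view might before plotting trends."""
    buckets = defaultdict(list)
    for r in responses:
        buckets[(r["department"], r["indicator"])].append(r["score"])
    return {key: round(mean(scores), 2) for key, scores in buckets.items()}

summary = aggregate(responses)
# e.g. summary[("Assembly", "readiness")] is the mean of 4 and 2, i.e. 3.0
```

A real system would add time stamps so managers can watch indicators move over the life of a change project, which is exactly the trend view the prototype's dashboard provides.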
Host: That sounds incredibly useful. What were the key findings after they developed this prototype?
Expert: The first finding is that this type of system can work and provide immense value. But the second, and perhaps more interesting finding, was about the challenges they faced in designing it. It's not as simple as just building a dashboard.
Host: What kind of challenges?
Expert: They identified four main ones. First was balancing user effort against the depth of insight. You want detailed data, but you can’t overwhelm employees with constant, lengthy surveys.
Host: That makes sense. What else?
Expert: Second, managing standardization versus adaptability. For the data to be comparable across the company, you need a standard tool. But every change project is unique and needs some flexibility. Finding that balance is tricky.
Host: So it's a constant trade-off.
Expert: It is. The other two challenges were more human-centric. They had to create a realistic understanding of what the data could actually represent—quantification isn’t a magic wand for complex social processes. And finally, they had to establish a shared vision for what the tool was for, to avoid confusion or resistance from users.
Host: Which brings us to the most important question, Alex. Why does this matter for business leaders listening today? What are the practical takeaways?
Expert: The biggest takeaway is that you can and should move from guesswork to data-informed decision-making in change management. This study provides a practical blueprint for how to do that. You can get a real pulse on your organization during its most critical moments.
Host: And it seems the lesson is that the tool itself is only half the battle.
Expert: Absolutely. The second key takeaway is that the design *process* is crucial. You have to treat the implementation of a monitoring system as a change project in its own right. That means involving stakeholders from all levels, communicating a clear vision for the tool, and being upfront about its limitations.
Host: You mentioned the importance of balance and trade-offs. How should a leader think about that?
Expert: That’s the third takeaway. Leaders must be willing to make conscious trade-offs. There is no perfect, one-size-fits-all solution. You have to decide what matters most for your organization: Is it ease of use, or is it granular data? Is company-wide standardization more important than project-specific flexibility? This study shows that acknowledging and navigating these trade-offs is central to success.
Host: So, Alex, to sum up, it sounds like while change is difficult, we now have a much clearer path to actually measuring and managing it effectively.
Expert: That's right. These new monitoring systems, combining simple surveys with powerful dashboards, can offer the transparency that leaders have been missing. But success hinges on a thoughtful design process that balances technology with the very human elements of change.
Host: A fantastic insight. Thank you so much for breaking that down for us, Alex.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning in. For A.I.S. Insights — powered by Living Knowledge, I’m Anna Ivy Summers.
Change Management, Monitoring, Action Design Research, Design Science, Industry