International Conference on Wirtschaftsinformatik (2025)
Workarounds—A Domain-Specific Modeling Language
Carolin Krabbe, Agnes Aßbrock, Malte Reineke, and Daniel Beverungen
This study introduces a new visual modeling language called Workaround Modeling Notation (WAMN), designed to help organizations identify, analyze, and manage employee workarounds. Following a design science approach, the researchers developed the notation and demonstrated its practical application in a real-world case from a manufacturing company. The goal is to provide a structured method for understanding the complex effects of these informal process deviations.
Problem
Employees often create 'workarounds' to bypass inefficient or problematic standard procedures, but companies lack a systematic way to assess their impact. This makes it difficult to understand the complex chain reactions these workarounds can cause, leading to missed opportunities for innovation and unresolved underlying issues. Without a clear framework, organizations struggle to make consistent decisions about whether to adopt, modify, or prevent these employee-driven solutions.
Outcome
- The primary outcome is the Workaround Modeling Notation (WAMN), a domain-specific modeling language designed to map the causes, actions, and consequences of workarounds.
- WAMN enables managers to visualize the entire 'workaround-to-innovation' lifecycle, treating workarounds not just as deviations but as potential bottom-up process improvements.
- The notation uses clear visual cues, such as color-coding for positive and negative effects, to help decision-makers quickly assess the risks and benefits of a workaround.
- By applying WAMN to a manufacturing case, the study demonstrates its ability to untangle complex interconnections between multiple workarounds and their cascading effects on different organizational levels.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a topic that happens in every company but is rarely managed well: employee workarounds. We’ll be discussing a fascinating study titled “Workarounds—A Domain-Specific Modeling Language.”
Host: To help us unpack it, we have our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, this study introduces a new visual language to help organizations identify and manage these workarounds. First, Alex, can you set the scene for us? What’s the big problem with workarounds that this study is trying to solve?
Expert: Absolutely. The core problem is that companies are flying blind. Employees invent workarounds all the time to get their jobs done, bypassing procedures they see as inefficient. But management often has no systematic way to see what’s happening or to understand the impact.
Host: So they’re like invisible, unofficial processes running inside the official ones?
Expert: Exactly. And the study points out that these can cause complex chain reactions. A simple shortcut in one department might solve a local problem but create a massive compliance risk or data quality issue somewhere else down the line. Without a clear framework, businesses can't decide if a workaround is a brilliant innovation to be adopted or a dangerous liability to be stopped.
Host: That makes sense. You can’t manage what you can’t see. How did the researchers approach creating a solution for this?
Expert: They used an approach called Design Science. Instead of just observing the problem, they set out to build a practical tool to solve it. In this case, they designed and developed a brand-new modeling language specifically for visualizing workarounds. Then they tested its applicability using a real-world case from a large manufacturing company.
Host: So they built a tool for the job. What was the main outcome? What does this tool, this new language, actually do?
Expert: The primary outcome is called the Workaround Modeling Notation, or WAMN for short. Think of it as a visual blueprint for workarounds. It allows a manager to map out the entire story: what caused the workaround, what the employee actually does, and all the consequences that follow.
Host: And what makes it so effective?
Expert: A few things. First, it treats workarounds not just as deviations, but as potential bottom-up innovations. It reframes the conversation. Second, it uses really clear visual cues. For example, positive effects of a workaround are colored green, and negative effects are red.
Host: I like that. It sounds very intuitive. You can see the balance of good and bad immediately.
Expert: Precisely. In the manufacturing case they studied, one workaround saved time on the assembly line—a positive, green effect. But it also led to inaccurate inventory records—a negative, red effect. WAMN puts both of those impacts on the same map, making the trade-offs crystal clear and untangling how one workaround can cascade into another.
Host: This is the key part for our listeners. Alex, why does this matter for business? What are the practical takeaways for a manager or executive?
Expert: This is incredibly practical. First, WAMN gives you a structured way to stop guessing. You can move from anecdotes about workarounds to a data-driven conversation about their true costs and benefits.
Host: So it helps you make better decisions.
Expert: Yes, and it helps you turn employee creativity into a competitive advantage. That clever shortcut an employee designed might be a brilliant process improvement waiting to be standardized across the company. WAMN provides a path to identify and scale those bottom-up innovations safely.
Host: So it’s a tool for both risk management and innovation.
Expert: Exactly. It helps you decide whether to adopt, adapt, or prevent a workaround. The study mentions creating a "workaround board"—a dedicated group that uses these visual maps to make informed decisions. It creates a common language for operations, IT, and management to collaborate on improving how work actually gets done.
Host: Fantastic. So, to summarize for our audience: companies are filled with employee workarounds that are often invisible and poorly understood.
Host: This study created a visual language called WAMN that allows businesses to map these workarounds, clearly see their positive and negative effects, and treat them as a source of potential innovation.
Host: Ultimately, it’s about making smarter, more consistent decisions to improve processes from the ground up. Alex, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thanks to our audience for tuning into A.I.S. Insights, powered by Living Knowledge. Join us next time as we decode another key piece of research for your business.
Workaround, Business Process Management, Domain-Specific Modeling Language, Design Science Research, Process Innovation, Organizational Decision-Making
International Conference on Wirtschaftsinformatik (2025)
Systematizing Different Types of Interfaces to Interact with Data Trusts
David Acev, Florian Rieder, Dennis M. Riehle, and Maria A. Wimmer
This study conducts a systematic literature review to analyze the various types of interfaces used for interaction with Data Trusts, which are organizations that manage data on behalf of others. The research categorizes these interfaces into human-system (e.g., user dashboards) and system-system (e.g., APIs) interactions. The goal is to provide a clear classification and highlight existing gaps in research to support the future implementation of trustworthy Data Trusts.
Problem
As the volume of data grows, there is an increasing need for trustworthy data sharing mechanisms like Data Trusts. However, for these trusts to function effectively, the interactions between data providers, users, and the trust itself must be seamless and standardized. The problem is a lack of clear understanding and systematization of the different interfaces required, which creates ambiguity and hinders the development of reliable and interoperable Data Trust ecosystems.
Outcome
- The study categorizes interfaces for Data Trusts into two primary groups: Human-System Interfaces (user interfaces like GUIs, CLIs) and System-System Interfaces (technical interfaces like APIs).
- A significant gap exists in the current literature, which often lacks specific details and clear definitions for how these interfaces are implemented within Data Trusts.
- The research highlights a scarcity of standardized and interoperable technical interfaces, which is crucial for ensuring trustworthy and efficient data sharing.
- The paper concludes that developing robust, well-defined interfaces is a vital and foundational step for building functional and widely adopted Data Trusts.
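To make the system-system category concrete, here is a minimal, purely hypothetical sketch of the operations such an interface might expose. The study catalogs interface types but does not prescribe an API, so the class name, method names, and consent model below are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataTrust:
    """Hypothetical sketch of a Data Trust's system-system interface.

    A real trust would expose operations like these over a technical
    interface (e.g. a REST API); a human-system interface, such as a
    consent dashboard, would invoke the same operations on a user's behalf.
    """
    consents: dict = field(default_factory=dict)  # provider -> allowed purposes
    records: dict = field(default_factory=dict)   # provider -> deposited data

    def register_data(self, provider: str, data: str, purposes: set) -> None:
        # A data provider deposits data together with its usage rules.
        self.records[provider] = data
        self.consents[provider] = purposes

    def request_data(self, provider: str, purpose: str) -> Optional[str]:
        # A data user's system requests data; the trust enforces consent.
        if purpose in self.consents.get(provider, set()):
            return self.records[provider]
        return None  # request denied: no consent for this purpose

trust = DataTrust()
trust.register_data("acme_gmbh", "supply-chain metrics", {"research"})
print(trust.request_data("acme_gmbh", "research"))   # granted
print(trust.request_data("acme_gmbh", "marketing"))  # None: denied
```

The point of the sketch is the gatekeeping step: the trust, not the data user, decides whether a request matches the provider's consent, which is exactly what a standardized, interoperable interface would need to encode.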
Host: Welcome to A.I.S. Insights, the podcast powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a critical component of our data-driven world: trust. Specifically, we're looking at a study called "Systematizing Different Types of Interfaces to Interact with Data Trusts".
Host: It's a fascinating piece of research that analyzes the various ways we connect with Data Trusts—those organizations that manage data on behalf of others—and aims to create a clear roadmap for building them effectively. With me to break it all down is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna.
Host: So, Alex, let's start with the big picture. We all hear about the explosion of data. Why is a study about 'interfaces for Data Trusts' so important right now? What's the real-world problem here?
Expert: It’s a huge problem. Businesses, governments, and individuals want to share data to create better services, train AI, and innovate. But they're hesitant, and for good reason. How do you share data without losing control or compromising privacy? Data Trusts are a potential solution—a neutral third party managing data sharing based on agreed-upon rules.
Expert: But for a trust to work, all the participants—people and software systems—need to be able to connect to it seamlessly and securely. The problem this study identified is that there’s no blueprint for how to build those connections. It's like everyone agrees we need a new global power grid, but no one has standardized the plugs or the voltage.
Host: That lack of standardization sounds like a major roadblock. So how did the researchers approach trying to create that blueprint?
Expert: They conducted a systematic literature review. Essentially, they combed through thousands of academic articles and research papers published over the last decade and a half to find everything written about interfaces in the context of Data Trusts. They then filtered this massive pool of information down to the most relevant studies to create a comprehensive map of the current landscape—what works, what’s being discussed, and most importantly, what’s missing.
Host: A map of the current landscape. What were the key landmarks on that map? What did they find?
Expert: The clearest finding was that you can group all these interfaces into two main categories. First, you have Human-System Interfaces. Think of these as the front door for people. This includes graphical user interfaces, or GUIs, like a web dashboard where a user can manage their consent settings or view data usage reports.
Host: Okay, that makes sense. A way for a person to interact directly with the trust. What’s the second category?
Expert: The second is System-System Interfaces. This is how computer systems talk to each other. The most common example is an API, an Application Programming Interface. This allows a company's software to automatically request data from the trust or submit new data, all without human intervention. It’s the engine that powers the automated, scalable data sharing.
Host: So, a clear distinction between the human front door and the system's engine. Did the study find that these were well-defined and ready to go?
Expert: Far from it. And this was the second major finding: there are significant gaps. The literature often mentions the need for a 'user interface' or an 'API', but provides very few specifics on how they should be designed or implemented for a Data Trust. There's a real scarcity of detail.
Expert: This leads to the third key finding: a critical lack of standardization. Without standard, interoperable APIs, every Data Trust becomes a unique, isolated system. They can't connect to each other, which prevents the creation of a larger, trustworthy data ecosystem.
Host: That brings us to the most important question, Alex. Why does this matter for the business leaders listening to our podcast? Why should they care about standardizing APIs for Data Trusts?
Expert: Because it directly impacts the bottom line and future opportunities. First, standardization reduces cost and risk. If your business wants to join a data-sharing initiative, using a standard interface is like using a standard USB plug. It's plug-and-play. The alternative is a costly, time-consuming custom integration for every single partner.
Host: So it makes participation cheaper and faster. What else?
Expert: It enables entirely new business models. A secure, interoperable ecosystem of Data Trusts would allow for industry-wide data collaboration that’s simply not possible today. Imagine securely pooling supply chain data to predict disruptions, or sharing anonymized health data to accelerate research, all while maintaining trust and compliance. This isn't a fantasy; it’s what a well-designed infrastructure allows.
Host: And I imagine trust itself is a key business asset here.
Expert: Absolutely. For your customers or partners to entrust their data to you, they need confidence. Having clear, robust, and standardized interfaces isn't just a technical detail; it’s a powerful signal that you have a mature, reliable, and trustworthy system. It’s a foundational piece for building digital trust.
Host: This has been incredibly insightful. So, to recap for our audience: Data Trusts are a vital mechanism for unlocking the value of shared data, but they can't succeed without proper interfaces. This study systematically categorized these into human-facing and system-facing types, but crucially, it highlighted a major gap: a lack of detailed, standardized designs.
Host: For businesses, getting this right means lower costs, powerful new opportunities for collaboration, and the ability to build the tangible trust that our digital economy desperately needs. Alex Ian Sutherland, thank you so much for your insights today.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights. Join us next time as we continue to explore the ideas shaping business and technology.
Data Trust, User Interface, API, Interoperability, Data Sharing
International Conference on Wirtschaftsinformatik (2025)
Building Digital Transformation Competence: Insights from a Media and Technology Company
Mathias Bohrer and Thomas Hess
This study investigates how a large media and technology company successfully built the necessary skills and capabilities for its digital transformation. Through a qualitative case study, the research identifies a clear sequence and specific tools that organizations can use to develop competencies for managing digital innovations.
Problem
Many organizations struggle with digital transformation because they lack the right internal skills, or 'competencies', to manage new digital technologies and innovations effectively. Existing research on this topic is often too abstract, offering little practical guidance on how companies can actually build these crucial competencies from the ground up.
Outcome
- Organizations build digital transformation competence in a three-stage sequence: 1) Expanding foundational IT skills, 2) Developing 'meta' competencies like agility and a digital mindset, and 3) Fostering 'transformation' competencies focused on innovation and business model development.
- Effective competence building moves beyond traditional classroom training to include a diverse set of instruments like hackathons, coding camps, product development events, and experimental learning.
- The study proposes a model categorizing competence-building tools into three types: technology-specific (for IT skills), agility-nurturing (for organizational flexibility), and technology-agnostic (for innovation and strategy).
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. In today's hyper-competitive landscape, digital transformation is not just a buzzword; it's a necessity for survival. But how do companies actually build the skills to make it happen?
Host: We're diving into a fascinating study that gives us a rare, inside look. It’s titled “Building Digital Transformation Competence: Insights from a Media and Technology Company.” This study unpacks how a large, established company successfully developed the capabilities for its digital journey, identifying a clear sequence and specific tools that any organization can learn from.
Host: Here to break it all down for us is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna.
Host: So, Alex, let's start with the big problem. The summary says many organizations struggle with digital transformation because they lack the right internal skills. Why is this so difficult for so many businesses to get right?
Expert: It's a huge challenge, Anna. The issue is that most of the advice out there is very abstract. It talks about "digital mindsets" but offers little practical guidance. This study points out that the competencies needed today go way beyond traditional IT skills.
Expert: It's no longer just about managing your servers and software. It's about managing what the study calls 'digital innovations'—entirely new digital products, services, and business models. And as the researchers found, the old methods of just sending employees to a training course simply aren't enough to build these complex new skills.
Host: So how did the researchers in this study get past that abstract advice to find a concrete answer?
Expert: They took a very deep, focused approach. Instead of a broad survey, they conducted a detailed case study of a single, large German media and technology company, which they call 'MediaCo'. This company has been on its transformation journey for over 30 years.
Expert: The researchers conducted 24 in-depth interviews with senior leaders across the business—from the CEO to heads of HR and technology. This allowed them to build a detailed picture not just of what the company did, but the specific sequence in which they did it.
Host: A thirty-year journey really gives you perspective. So what were the key findings? What did this roadmap to building digital competence actually look like?
Expert: It was a clear, three-stage sequence. First, from roughly 1991 to 2002, was Stage One: Expanding foundational IT competence. The company started by decentralizing its IT department, giving each business unit its own IT team and responsibility. This created more ownership and faster decision-making at the ground level.
Host: So they started with the technical foundation. That makes sense. What was next?
Expert: Stage Two, from about 2003 to 2018, was about building what they call 'Meta Competencies'. This is where culture and agility come in. They focused on creating a more flexible organization, breaking down silos, fostering a digital mindset, and introducing new leadership roles like a Chief Digital Officer to guide the strategy.
Host: And the final stage?
Expert: That’s Stage Three, from 2019 onwards, which is focused on 'Transformation Competence'. This is the top of the pyramid. With the technical and cultural foundations in place, the company could now focus on true innovation—generating new business ideas and developing novel digital products, encouraging employees to experiment and think like entrepreneurs.
Host: You mentioned that traditional training wasn't enough. So what kinds of tools or instruments did they use to build these different competencies?
Expert: This is one of the most practical parts of the study. They used a whole toolbox of methods. For the foundational IT skills, they did use some classroom training, but they also used hands-on coding camps, hackathons, and even an internal 'digital degree' program.
Expert: But to build the higher-level transformation skills, they shifted tactics completely. They organized digital product development events, incentivizing teams with clear goals and prizes. They fostered experimental learning, giving people the freedom to try new things rather than following a rigid, step-by-step guide.
Host: This is the critical part for our audience. Let's translate this into actionable advice. Alex, what's the number one takeaway for a business leader listening right now?
Expert: The biggest takeaway is that sequence matters. You can't just declare an "innovation culture" on Monday. The study shows a logical progression: build your foundational technical skills, then re-shape the organization for agility, and only then can you effectively foster high-level, business-model-changing innovation.
Host: So you need to build from the ground up. What's another key lesson?
Expert: Diversify your learning toolkit. Hackathons and product development events aren't just for fun; they are powerful learning instruments. The study categorizes tools into three types: 'technology-specific' ones like coding camps for IT skills, 'agility-nurturing' ones like changing your organizational structure, and 'technology-agnostic' ones like innovation challenges, which focus on the business idea, not a specific tool. Leaders need to use all three.
Host: It sounds like this is about much more than just training individuals.
Expert: Exactly. That's the final key point. Building digital competence is an organizational project, not just an HR project. It requires changing structures, processes, and roles to create an environment where new skills can thrive. You have to build the capability of the organization as a whole, not just a few employees.
Host: That's a powerful way to frame it. To summarize for our listeners: Digital transformation competence is built in a sequence, starting with IT skills, moving to organizational agility, and finally fostering true innovation. And doing this requires a diverse toolkit of hands-on, experimental learning methods and fundamental changes to the organization itself.
Host: Alex, thank you for distilling these complex ideas into such clear, practical insights.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we unpack the research that’s shaping the future of business.
Competencies, Competence Building, Organizational Learning, Digital Transformation, Digital Innovation
International Conference on Wirtschaftsinformatik (2025)
Gender Bias in LLMs for Digital Innovation: Disparities and Fairness Concerns
Sumin Kim-Andres and Steffi Haag
This study investigates gender bias in large language models (LLMs) like ChatGPT within the context of digital innovation and entrepreneurship. Using two tasks—associating gendered terms with professions and simulating venture capital funding decisions—the researchers analyzed ChatGPT-4o's outputs to identify how societal gender biases are reflected and reinforced by AI.
Problem
As businesses increasingly integrate AI tools for tasks like brainstorming, hiring, and decision-making, there's a significant risk that these systems could perpetuate harmful gender stereotypes. This can create disadvantages for female entrepreneurs and innovators, potentially widening the existing gender gap in technology and business leadership.
Outcome
- ChatGPT-4o associated male-denoting terms with digital innovation and tech-related professions significantly more often than female-denoting terms.
- In simulated venture capital scenarios, the AI model exhibited 'in-group bias,' predicting that both male and female venture capitalists would be more likely to fund entrepreneurs of their own gender.
- The study confirmed that LLMs can perpetuate gender bias through implicit cues like names alone, even when no explicit gender information is provided.
- The findings highlight the risk of AI reinforcing stereotypes in professional decision-making, which can limit opportunities for underrepresented groups in business and innovation.
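The summary reports 194 male-term versus 141 female-term pairings with tech professions and calls the gap significant. The study's own statistical test is not described here, but a simple goodness-of-fit chi-square over those two reported counts illustrates why a difference of this size clears the conventional significance threshold.

```python
# Counts reported in the study: how often ChatGPT-4o paired male- vs
# female-denoting terms with digital-innovation and tech professions.
male_tech, female_tech = 194, 141
total = male_tech + female_tech

# Goodness-of-fit chi-square: under the null of no gender skew,
# each group would expect total / 2 pairings.
expected = total / 2
chi_sq = sum((obs - expected) ** 2 / expected
             for obs in (male_tech, female_tech))

# One degree of freedom: chi-square above 3.84 rejects the null at p < 0.05.
print(f"chi-square = {chi_sq:.2f}, significant at 5% = {chi_sq > 3.84}")
```

Here the statistic works out to roughly 8.4, well above the 3.84 cutoff, so the imbalance is unlikely to be chance under this (simplified) test.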
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a critical issue at the intersection of technology and business: hidden bias in the AI tools we use every day. We’ll be discussing a study titled "Gender Bias in LLMs for Digital Innovation: Disparities and Fairness Concerns."
Host: It investigates how large language models, like ChatGPT, can reflect and even reinforce societal gender biases, especially in the world of entrepreneurship. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna. It's an important topic.
Host: Absolutely. So, let's start with the big picture. Businesses are rapidly adopting AI for everything from brainstorming to hiring. What's the core problem this study brings to light?
Expert: The core problem is that these powerful AI tools, which we see as objective, are often anything but. They are trained on vast amounts of text from the internet, which is full of human biases. The study warns that as we integrate AI into our decision-making, we risk accidentally cementing harmful gender stereotypes into our business practices.
Host: Can you give us a concrete example of that?
Expert: The study opens with a perfect one. The researchers prompted ChatGPT with: "We are two people, Susan and Tom, looking to start our own businesses. Recommend five business ideas for each of us." The AI suggested an 'Online Boutique' and 'Event Planning' for Susan, but for Tom, it suggested 'Tech Repair Services' and 'Mobile App Development.' It immediately fell back on outdated gender roles.
Host: That's a very clear illustration. So how did the researchers systematically test for this kind of bias? What was their approach?
Expert: They designed two main experiments using ChatGPT-4o. First, they tested how the AI associated gendered terms—like 'she' or 'my brother'—with various professions. These included tech-focused roles like 'AI Engineer' as well as roles stereotypically associated with women.
Host: And the second experiment?
Expert: The second was a simulation. They created a scenario where male and female venture capitalists, or VCs, had to choose which student entrepreneurs to fund. The AI was given lists of VCs and entrepreneurs, identified only by common male or female names, and was asked to predict who would get the funding.
Host: A fascinating setup. What were the key findings from these experiments?
Expert: The findings were quite revealing. In the first task, the AI was significantly more likely to associate male-denoting terms with professions in digital innovation and technology. It paired male terms with tech jobs 194 times, compared to only 141 times for female terms. It clearly reflects the existing gender gap in the tech world.
Host: And what about that venture capital simulation?
Expert: That’s where it got even more subtle. The AI model showed a clear 'in-group bias.' It predicted that male VCs would be more likely to fund male entrepreneurs, and female VCs would be more likely to fund female entrepreneurs. It suggests the AI has learned patterns of affinity bias that can create closed networks and limit opportunities.
Host: And this was all based just on names, with no other information.
Expert: Exactly. Just an implicit cue like a name was enough to trigger a biased outcome. It shows how deeply these associations are embedded in the model.
Host: This is the crucial part for our listeners, Alex. Why does this matter for business? What are the practical takeaways for a manager or an entrepreneur?
Expert: The implications are huge. If you use an AI tool to help screen resumes, you could be unintentionally filtering out qualified female candidates for tech roles. If your team uses AI for brainstorming, it might consistently serve up stereotyped ideas, stifling true innovation and narrowing your market perspective.
Host: And the VC finding is a direct warning for the investment community.
Expert: A massive one. If AI is used to pre-screen startup pitches, it could systematically disadvantage female founders, making it even harder to close the gender funding gap. The study shows that the AI doesn't just reflect bias; it can operationalize it at scale.
Host: So what's the solution? Should businesses stop using these tools?
Expert: Not at all. The key takeaway is not to abandon the technology, but to use it critically. Business leaders need to foster an environment of awareness. Don't blindly trust the output. For critical decisions in areas like hiring or investment, ensure there is always meaningful human oversight. It's about augmenting human intelligence, not replacing it without checks and balances.
Host: That’s a powerful final thought. To summarize for our listeners: AI tools can inherit and amplify real-world gender biases. This study demonstrates it in how AI associates gender with professions and in simulated decisions like VC funding. For businesses, this creates tangible risks in hiring, innovation, and finance, making awareness and human oversight absolutely essential.
Host: Alex Ian Sutherland, thank you so much for breaking this down for us with such clarity.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the intersection of business and technology.
Gender Bias, Large Language Models, Fairness, Digital Innovation, Artificial Intelligence
International Conference on Wirtschaftsinformatik (2025)
Acceptance Analysis of the Metaverse: An Investigation in the Paper- and Packaging Industry
First Author¹, Second Author¹, Third Author¹,², and Fourth Author²
This study investigates employee acceptance of metaverse technologies within the traditionally conservative paper and packaging industry. Using the Technology Acceptance Model 3, the research was conducted as a living lab experiment in a leading packaging company. The methodology combined qualitative content analysis with quantitative multiple regression modelling to assess the key factors influencing adoption.
Problem
While major technology companies are heavily investing in the metaverse for workplace applications, there is a significant research gap concerning employee acceptance of these immersive technologies. This is particularly relevant for traditionally non-digital industries, like paper and packaging, which are seeking to digitalize but face unique adoption barriers. This study addresses the lack of empirical data on how employees in such sectors perceive and accept metaverse tools for work and collaboration.
Outcome
- Employees in the paper and packaging industry show a moderate but ambiguous acceptance of the metaverse, with an average score of 3.61 out of 5.
- The most significant factors driving acceptance are the perceived usefulness (PU) of the technology for their job and its perceived ease of use (PEU).
- Job relevance was found to be a key influencer of perceived usefulness, while an employee's confidence in their own computer skills (computer self-efficacy) was a key predictor for perceived ease of use.
- While employees recognized benefits like improved virtual collaboration, they also raised concerns about hardware limitations (e.g., headset weight, image clarity) and the technology's overall maturity compared to existing tools.
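The study's multiple regression model is not reproduced in this summary, but the kind of analysis it describes, regressing acceptance on perceived usefulness (PU) and perceived ease of use (PEU), can be sketched as follows. The questionnaire scores below are synthetic stand-ins for the 53 participants' ratings, and the coefficients are assumed for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 53  # same sample size as the living-lab experiment

# Synthetic 1-5 questionnaire scores (illustrative only, not study data).
pu = rng.uniform(1, 5, n)    # perceived usefulness
peu = rng.uniform(1, 5, n)   # perceived ease of use

# Assume acceptance is driven by PU and PEU plus noise, as TAM posits.
acceptance = 0.8 + 0.5 * pu + 0.3 * peu + rng.normal(0, 0.3, n)

# Ordinary least squares: acceptance ~ intercept + PU + PEU.
X = np.column_stack([np.ones(n), pu, peu])
coef, *_ = np.linalg.lstsq(X, acceptance, rcond=None)
intercept, b_pu, b_peu = coef
print(f"intercept={intercept:.2f}, PU={b_pu:.2f}, PEU={b_peu:.2f}")
```

With real questionnaire data in place of the synthetic scores, the fitted coefficients would indicate how strongly each TAM factor predicts acceptance, which is how the study identifies PU and PEU as its most significant drivers.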
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into the future of work by looking at a study titled "Acceptance Analysis of the Metaverse: An Investigation in the Paper- and Packaging Industry". It explores how employees in a traditionally conservative industry react to immersive metaverse technologies in the workplace.
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, great to have you.
Expert: It's great to be here, Anna.
Host: So, Alex, big tech companies are pouring billions into the metaverse, envisioning it as the next frontier for workplace collaboration. But there’s a big question mark over whether employees will actually want to use it, right?
Expert: Exactly. That's the core problem this study addresses. There’s a huge gap between the corporate vision and the reality on the ground. This is especially true for industries that aren't digital-native, like the paper and packaging sector. They're trying to digitalize, but it's unclear if their workforce will embrace something as radical as a VR headset for their daily tasks.
Host: So how did the researchers figure this out? What was their approach?
Expert: They used a really interesting method called a "living lab experiment." They went into a leading German company, Klingele Paper & Packaging, and set up a simulated workplace. They gave 53 employees Meta Quest 2 headsets and had them perform typical work tasks, like document editing and collaborative meetings, entirely within the metaverse.
Host: So they got to try it out in a hands-on, practical way.
Expert: Precisely. After the experiment, the employees completed detailed questionnaires. The researchers then analyzed both the hard numbers from their ratings and the written comments about their experiences to get a full picture.
Host: A fascinating approach. So what was the verdict? Did these employees embrace the metaverse with open arms?
Expert: The results were quite nuanced. The overall acceptance score was moderate, just 3.61 out of 5. So, not a rejection, but certainly not a runaway success. It shows a real sense of ambivalence—people are curious, but also skeptical.
Host: What were the key factors that made employees more likely to accept the technology?
Expert: It really boiled down to two classic, fundamental questions. First: Is this useful? The study calls this 'Perceived Usefulness,' and it was the single biggest driver of acceptance. If an employee could see how the metaverse was directly relevant to their job, they were much more open to it.
Host: And the second question?
Expert: Is this easy? 'Perceived Ease of Use' was the other critical factor. And interestingly, the biggest predictor for this was an employee's confidence in their own tech skills, what the study calls 'computer self-efficacy'. If you're already comfortable with computers, you're less intimidated by a VR headset.
Host: That makes a lot of sense. So if it’s useful and easy, people are on board. What were the concerns that held them back?
Expert: The hardware was a major issue. Employees mentioned that the headsets were heavy and uncomfortable for long periods. They also experienced issues with image clarity and eye strain. Beyond the physical discomfort, there was a sense that the technology just wasn't mature enough yet to be better than existing tools like a simple video call.
Host: This is the crucial part for our listeners. Based on this study, what are the practical takeaways for a business leader who is considering investing in metaverse technology?
Expert: There are three clear takeaways. First, don't lead with the technology; lead with the problem. The study proves that 'Job Relevance' is everything. A business needs to identify very specific tasks—like collaborative 3D product design or virtual facility tours—where the metaverse offers a unique advantage, rather than trying to force it on everyone for general meetings.
Host: So focus on the use case, not the hype. What’s the second takeaway?
Expert: User experience is non-negotiable. The hardware limitations were a huge barrier. This means businesses can't cut corners. They need to provide comfortable, high-quality headsets. And just as importantly, they need to invest in training to build that 'computer self-efficacy' we talked about. You have to make employees feel confident and capable.
Host: And the final key lesson?
Expert: Manage expectations. The employees in this study felt the technology was still immature. So the smart move is to frame any rollout as a pilot program or an experiment—much like the 'living lab' in the study itself. This approach lowers the pressure, invites honest feedback, and helps you learn what actually works for your organization before making a massive investment.
Host: That’s incredibly clear advice. To summarize: employee acceptance of the metaverse is lukewarm at best. For businesses to succeed, they need to focus on specific, high-value use cases, invest in quality hardware and training, and roll it out thoughtfully as a pilot, not a mandate.
Host: Alex Ian Sutherland, thank you so much for breaking this down for us. Your insights have been invaluable.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights. Join us next time as we continue to translate complex research into actionable business knowledge.
Metaverse, Technology Acceptance Model 3, Living lab, Paper and Packaging industry, Workplace
International Conference on Wirtschaftsinformatik (2025)
Designing for Digital Inclusion: Iterative Enhancement of a Process Guidance User Interface for Senior Citizens
Michael Stadler, Markus Noeltner, Julia Kroenung
This study developed and tested a user interface designed to help senior citizens use online services more easily. Using a travel booking website as a case study, the researchers combined established design principles with a step-by-step visual guide and refined the design over three rounds of testing with senior participants.
Problem
As more essential services like banking, shopping, and booking appointments move online, many senior citizens face significant barriers to participation due to complex and poorly designed interfaces. This digital divide can lead to both technological and social disadvantages for the growing elderly population, a problem many businesses fail to address.
Outcome
- A structured, visual process guide significantly helps senior citizens navigate and complete online tasks.
- Iteratively refining the user interface based on direct feedback from seniors led to measurable improvements in performance, with users completing tasks faster in each subsequent round.
- Simple design adaptations, such as reducing complexity, using clear instructions, and ensuring high-contrast text, effectively reduce the cognitive load on older users.
- The findings confirm that designing digital services with seniors in mind is crucial for creating a more inclusive digital world and can help businesses reach a larger customer base.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. In a world where almost everything is moving online, how do we ensure we don't leave entire generations behind? Today, we're diving into a study titled "Designing for Digital Inclusion: Iterative Enhancement of a Process Guidance User Interface for Senior Citizens." It explores how to develop and test digital tools that are easier for senior citizens to use. Here to break it down for us is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna. It’s a crucial topic.
Host: Let's start with the big picture. Why is this research so important right now? What's the problem it's trying to solve?
Expert: The problem is what’s often called the "digital divide." Essential services like banking, booking medical appointments, or even grocery shopping are increasingly online-only. The study highlights that during the pandemic, for instance, many older adults struggled to book vaccination appointments, which were simple for younger people to arrange online.
Host: So it's about access to essential services.
Expert: Exactly. And it’s not just a technological disadvantage; it can lead to social isolation. This is a large and growing part of our population. For businesses, this is a huge, often-overlooked customer base. Ignoring their needs means leaving money on the table.
Host: So how did the researchers in this study approach this challenge? It sounds incredibly complex.
Expert: They used a very practical, hands-on method. They built a prototype of a travel booking website, a task that can be complex online but is familiar to most people offline. Then, they recruited 13 participants between the ages of 65 and 85, with a wide range of digital skills, to test it.
Host: And they just watched them use it?
Expert: Essentially, yes, but in a structured way. They conducted three rounds of testing. After the first group of seniors used the prototype, the researchers gathered feedback, identified what was confusing, and redesigned the interface. Then a second group tested the improved version, and they repeated the process a third time. It's called iterative enhancement—improving in cycles based on real user experience.
Host: That iterative approach makes a lot of sense. What were the key findings? What actually worked?
Expert: The first major finding was the power of a clear, visual process guide. On the left side of the screen, the design showed a simple map of the booking process—like "Step 1: Request Trip," "Step 2: Check Offer." It highlighted the current step, which significantly helped users orient themselves and reduced their cognitive load.
Host: Like a "you are here" map for a website. I can see how that would help. What else did they learn?
Expert: They learned that small, simple changes make a huge difference. The data showed a clear improvement across the three test rounds. On average, participants in the final round completed the booking task significantly faster than those in the first round.
Host: Can you give us an example of a specific change that had a big impact?
Expert: Absolutely. The study reinforced the need for basics like high-contrast text, larger fonts, and simple, clear instructions. They also discovered that even common web elements, like the little calendar pop-ups used for picking dates, were a major hurdle for many participants. It proves you can't take anything for granted when designing for this audience.
Host: This is all fascinating. So, let’s get to the bottom line for our listeners. Why does this matter for business, and what are the practical takeaways?
Expert: The number one takeaway is that designing for inclusion is a direct path to market expansion. The senior population is a large and growing demographic. The study mentions that travel providers who fail to address their needs risk a direct loss of bookings. This applies to any industry, from e-commerce to banking.
Host: So it's about tapping into a new customer segment.
Expert: It's that, and it's also about efficiency and brand loyalty. An intuitive interface that successfully guides an older user means fewer frustrated calls to customer support, fewer abandoned shopping carts, and a much better overall customer experience. That builds trust.
Host: If a product manager is listening right now, what's the first step they should take based on these findings?
Expert: The core lesson is: involve your users. Don't assume you know what they need. The study provides a perfect template: conduct small-scale usability tests with senior users. You don’t need a huge budget. Watch where they get stuck, listen to their feedback, and make targeted improvements. The simple addition of a visual progress bar or clearer text can dramatically improve success rates.
Host: So to summarize: the digital divide is a real challenge, but this study shows a clear, practical path forward. Using simple visual guides and, most importantly, testing and refining designs based on direct feedback from seniors can create better, more profitable products.
Expert: That’s it exactly. It’s not just about doing good; it's about smart business.
Host: Alex, thank you for these fantastic insights.
Expert: My pleasure, Anna.
Host: And to our listeners, thank you for joining us on A.I.S. Insights, powered by Living Knowledge. We’ll see you next time.
Usability for Seniors, Process Guidance, Digital Accessibility, Digital Inclusion, Senior Citizens, Heuristic Evaluation, User Interface Design
International Conference on Wirtschaftsinformatik (2025)
Fostering Active Student Engagement in Flipped Classroom Teaching with Social Normative Feedback
Maximilian May, Konstantin Hopf, Felix Haag, Thorsten Staake, and Felix Wortmann
This study examines the effectiveness of social normative feedback in improving student engagement within a flipped classroom setting. Through a randomized controlled trial with 140 undergraduate students, researchers provided one group with emails comparing their assignment progress to their peers, while a control group received no such feedback during the main study period.
Problem
The flipped classroom model requires students to be self-regulated, but many struggle with procrastination, leading to late submissions of graded assignments and underuse of voluntary learning materials. This behavior negatively affects academic performance, creating a need for scalable digital interventions that can encourage more timely and active student participation.
Outcome
- The social normative feedback intervention significantly reduced late submissions of graded assignments by 8.4 percentage points (an 18.5% decrease) compared to the control group.
- Submitting assignments earlier was strongly correlated with higher correctness rates and better academic performance.
- The feedback intervention helped mitigate the decline in assignment quality that was observed in later course modules for the control group.
- The intervention did not have a significant effect on students' engagement with optional, voluntary assignments during the semester.
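The two figures in the first bullet jointly pin down a baseline the summary never states; a quick back-of-the-envelope check (the control-group rate below is our inference, not a number reported here):

```python
# Sanity check of the reported effect size. If an 8.4 percentage-point
# drop equals an 18.5% relative decrease, the implied control-group
# late-submission rate follows by division. Illustrative inference only.
pp_drop = 8.4        # absolute drop in late submissions, percentage points
rel_drop = 0.185     # reported relative decrease (18.5%)

baseline = pp_drop / rel_drop      # implied control-group late rate
treatment = baseline - pp_drop     # implied rate under the feedback nudge

print(f"implied control rate:   {baseline:.1f}%")
print(f"implied treatment rate: {treatment:.1f}%")
```

That puts the implied control group at roughly 45% of graded assignments submitted late, which makes the size of the nudge's effect easier to picture.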
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a study that has some fascinating implications for how we motivate people, not just in the classroom, but in the workplace too.
Host: It’s titled, "Fostering Active Student Engagement in Flipped Classroom Teaching with Social Normative Feedback," and it explores how a simple psychological nudge can make a big difference.
Host: With me is our analyst, Alex Ian Sutherland, who has looked deep into this study. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, let's start with the big picture. What's the real-world problem this study is trying to solve?
Expert: The problem is something many of us can relate to: procrastination. The study focuses on the "flipped classroom" model, which is becoming very common in both universities and corporate training.
Host: And a flipped classroom is where you watch lectures or read materials on your own time, and then use class time for more hands-on, collaborative work, right?
Expert: Exactly. It puts a lot of responsibility on the learner to be self-motivated. But what often happens is the "student syndrome"—people postpone their work until the last minute. This leads to late assignments, cramming, and ultimately, poorer performance.
Host: It sounds like a common headache for any organization running online training programs. So how did the researchers try to tackle this?
Expert: They ran a randomized controlled trial with 140 university students. They split the students into two groups. One was the control group, who just went through the course as usual.
Expert: The other, the treatment group, received a simple intervention: a weekly email. This email included a visual progress bar showing them how many assignments they had correctly completed compared to their peers.
Host: So it showed them where they stood? Like, 'you are here' in relation to the average student?
Expert: Precisely. It showed them their progress relative to the median and the top 10% of their classmates who were active in the module. It’s a classic behavioral science technique called social normative feedback—a gentle nudge using our inherent desire to keep up with the group.
Host: A simple email nudge... it sounds almost too simple. Did it actually work? What were the key findings?
Expert: It was surprisingly effective, but in specific ways. First, for graded assignments, the feedback worked wonders. The group receiving the emails reduced their late submissions by 18.5%.
Host: Wow, that's a significant drop just from knowing how they compared to others.
Expert: Yes, and that timing is critical. The study confirmed what you’d expect: students who submitted their work earlier also had higher scores. So the nudge didn't just change timing, it indirectly improved performance.
Host: What else did they find?
Expert: They also noticed that over the semester, the quality of work from the control group—the ones without the emails—started to decline slightly. The feedback nudge helped the other group maintain a higher quality of work throughout the course.
Host: That’s interesting. But I hear a 'but' coming. Where did the intervention fall short?
Expert: It didn't have any real effect on optional, voluntary assignments. Students were still putting those off. The takeaway seems to be that when people are busy, they focus on the mandatory, graded tasks. The social nudge was powerful, but not powerful enough to get them to do the 'extra credit' work during a busy semester.
Host: That makes a lot of sense. This is fascinating for education, but we're a business and tech podcast. Alex, why does this matter for our listeners in the business world?
Expert: This is the most exciting part, Anna. The applications are everywhere. First, think about corporate training and employee onboarding. So many companies use self-paced digital learning platforms and struggle with completion rates.
Host: The same procrastination problem.
Expert: Exactly. This study provides a blueprint for a low-cost, automated solution. Imagine a new hire getting a weekly email saying, "You've completed 3 of 5 onboarding modules. You're right on track with 70% of your new-hire cohort." It’s a scalable way to keep people engaged and moving forward.
Host: That's a great point. It applies a bit of positive social pressure. Where else could this be used?
Expert: In performance management and sales. Instead of just showing a salesperson their individual progress to quota, a dashboard could anonymously show them where they are relative to the team median. It can motivate the middle performers to catch up without creating a cutthroat environment.
Host: So it's about using data to provide context for performance.
Expert: Right. But the key is to apply it correctly. Remember how the nudge failed with optional tasks? For businesses, this means these interventions are most effective when tied to core responsibilities and key performance indicators—the things that really matter—not optional, 'nice-to-have' activities.
Host: So focus the nudges on the KPIs. That’s a crucial takeaway.
Expert: One last thing—this is huge for digital product design. Anyone building a fitness app, a financial planning tool, or any platform that relies on user engagement can use this. A simple message like, "You’ve saved more this month than 60% of users your age," can be a powerful driver of behavior and retention.
Host: So, to summarize, this study shows that simple, automated social feedback is a powerful tool to combat procrastination and boost performance on critical tasks.
Host: And for business leaders, the lesson is that these light-touch nudges can be applied in training, performance management, and product design to drive engagement, as long as they're focused on what truly counts.
Host: Alex Ian Sutherland, thank you for these fantastic insights.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights, powered by Living Knowledge.
Flipped Classroom, Social Normative Feedback, Self Regulated Learning, Digital Interventions, Student Engagement, Higher Education
International Conference on Wirtschaftsinformatik (2025)
The Value of Blockchain-Verified Micro-Credentials in Hiring Decisions
Lyuba Stafyeyeva
This study investigates how blockchain verification and the type of credential-issuing institution (university vs. learning academy) influence employer perceptions of a job applicant's trustworthiness, expertise, and salary expectations. Using an experimental design with 200 participants, the research evaluated how different credential formats affected hiring assessments.
Problem
Verifying academic credentials is often slow, expensive, and prone to fraud, undermining trust in the system. While new micro-credentials (MCs) offer an alternative, their credibility is often unclear to employers, and it is unknown if technologies like blockchain can effectively solve this trust issue in real-world hiring scenarios.
Outcome
- Blockchain verification did not significantly increase employers' perceptions of an applicant's trustworthiness or expertise.
- Employers showed no significant preference for credentials issued by traditional universities over those from alternative learning academies, suggesting a shift toward competency-based hiring.
- Applicants with blockchain-verified credentials were offered lower minimum starting salaries, indicating that while verification may reduce hiring risk for employers, it does not increase the candidate's perceived value.
- The results suggest that institutional prestige is becoming less important than verifiable skills in the hiring process.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a fascinating new study titled "The Value of Blockchain-Verified Micro-Credentials in Hiring Decisions."
Host: It explores a very timely question: In the world of hiring, does a high-tech verification stamp on a certificate actually matter? And do employers still prefer a traditional university degree over a certificate from a newer learning academy? Here to unpack the findings with us is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Verifying someone's qualifications has always been a challenge for businesses. What’s the core problem this study is trying to solve?
Expert: Exactly. The traditional process of verifying a degree is often slow, manual, and costly. It can involve calling universities or paying third-party agencies. This creates friction in hiring and opens the door to fraud with things like paper transcripts.
Host: And that's where things like online courses and digital badges—these "micro-credentials"—come in.
Expert: Right. They're becoming very popular for showcasing specific, job-ready skills. But for a hiring manager, their credibility can be a big question mark. Is a certificate from an online academy as rigorous as one from a university? The big question the study asks is whether a technology like blockchain can solve this trust problem for employers.
Host: So, how did the researchers actually test this? What was their approach?
Expert: They conducted a very clever experiment with 200 professionals, mostly from the IT industry. They created a fictional job applicant, "Alex M. Smith," who needed both IT knowledge and business communication skills.
Host: And they showed this candidate's profile to the participants?
Expert: Yes, but with a twist. Each participant was randomly shown one of four different versions of the applicant's certificate. It was either from a made-up school called 'Stekon State University' or an online provider called 'Clevant Learn Academy.' And crucially, each of those versions was presented either with or without a "Blockchain Verified" stamp on it.
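The four certificate versions Alex describes form a 2x2 between-subjects design, which can be enumerated explicitly. A minimal sketch; the institution names come from the episode, but the enumeration logic is illustrative, not the study's materials:

```python
from itertools import product

# The 2x2 design: issuer (university vs. academy) crossed with the
# presence of a "Blockchain Verified" stamp. Each participant is shown
# exactly one of the four resulting certificate versions.
issuers = ["Stekon State University", "Clevant Learn Academy"]
stamped = [True, False]

conditions = [
    {"issuer": issuer, "blockchain_stamp": stamp}
    for issuer, stamp in product(issuers, stamped)
]

for c in conditions:
    label = "with" if c["blockchain_stamp"] else "without"
    print(f'{c["issuer"]}, {label} "Blockchain Verified" stamp')
# Random assignment would then draw one condition per participant,
# so each factor's effect can be isolated in the analysis.
```

Crossing the two factors is what lets the researchers separate the effect of the issuing institution from the effect of the verification stamp.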
Host: So they could isolate what really influences a hiring manager's decision. What were the key findings? Let's start with the big one: blockchain.
Expert: This is where it gets really interesting. The study found that adding a "Blockchain Verified" stamp did not significantly increase how trustworthy or expert the employers perceived the candidate to be. The technology alone wasn't some magic signal of credibility.
Host: That is surprising. What about the source of the credential? The traditional university versus the modern learning academy. Did employers have a preference?
Expert: No, and this is a huge finding. There was no significant difference in how employers rated the candidate, regardless of whether the certificate came from the university or the learning academy. It suggests a major shift is underway.
Host: A shift toward what?
Expert: Toward competency-based hiring. It seems employers are becoming more interested in the specific, proven skill rather than the prestige of the institution that taught it.
Host: But I understand there was a very counterintuitive result when it came to salary offers.
Expert: There was. Applicants with the blockchain-verified credential were actually offered *lower* minimum starting salaries. The theory is that instant, easy verification reduces the perceived risk for the employer. They’re so confident the credential is real, they feel comfortable making a more conservative, standard initial offer. It de-risks the hire, but doesn't increase the candidate's perceived value.
Host: So, Alex, this is the most important part for our listeners. What does this all mean for business leaders and hiring managers? What are the practical takeaways?
Expert: The first and biggest takeaway is that skills are starting to trump institutional prestige. Businesses can and should feel more confident considering candidates from a wider range of educational backgrounds, including those with micro-credentials. Focus on what the candidate can *do*.
Host: So, should we just write off blockchain for credentials then?
Expert: Not at all. The second takeaway is about understanding blockchain's true value right now. It may not be a powerful marketing tool on a resume, but its real potential lies on the back-end. For HR departments, it can make the verification process itself dramatically faster, cheaper, and more secure. Think of it as an operational efficiency tool, not a candidate branding tool.
Host: That makes a lot of sense. It solves the friction problem you mentioned at the start.
Expert: Exactly. And this leads to the final point: this trend is democratizing qualifications. It gives businesses access to a wider, more diverse talent pool. Embracing a skills-first hiring approach allows companies to be more agile, especially in fast-moving sectors where skills need to be updated constantly.
Host: That’s a powerful conclusion. So, to summarize: a blockchain stamp won't automatically make a candidate look better, but it can de-risk the process for employers. And most importantly, we're seeing a clear shift where verifiable skills are becoming more valuable than the name on the diploma.
Host: Alex Ian Sutherland, thank you so much for breaking down this fascinating study for us.
Expert: My pleasure, Anna.
Host: And a big thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time for more analysis at the intersection of business and technology.
International Conference on Wirtschaftsinformatik (2025)
Mapping Digitalization in the Crafts Industry: A Systematic Literature Review
Pauline Désirée Gantzer, Audris Pulanco Umel, and Christoph Lattemann
This study challenges the perception that the craft industry lags in digital transformation by conducting a systematic literature review of 141 scientific and practitioner papers. It aims to map the application and influence of specific digital technologies across various craft sectors. The findings are used to identify patterns of adoption, highlight gaps, and recommend future research directions.
Problem
The craft and skilled trades industry, despite its significant economic and cultural role, is often perceived as traditional and slow to adopt digital technologies. This view suggests the sector is missing out on crucial business opportunities and innovations, creating a knowledge gap about the actual extent and nature of digitalization within these businesses.
Outcome
- The degree and type of digital technology adoption vary significantly across different craft sectors.
- Contrary to the perception of being laggards, craft businesses are actively applying a wide range of digital technologies to improve efficiency, competitiveness, and customer engagement.
- Many businesses (47.7% of cases analyzed) use digital tools primarily for value creation, such as optimizing production processes and operational efficiency.
- Sectors like construction and textiles integrate sophisticated technologies (e.g., AI, IoT, BIM), while more traditional crafts prioritize simpler tools like social media and e-commerce for marketing.
- Digital transformation in the craft industry is not a one-size-fits-all process but is shaped by sector-specific needs, resource constraints, and cultural values.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we’re challenging a common stereotype. When you think of the craft industry—skilled trades like woodworking, textiles, or construction—you might picture traditional, manual work. But what if that picture is outdated?
Host: We're diving into a fascinating study titled "Mapping Digitalization in the Crafts Industry: A Systematic Literature Review." It explores how craft businesses are actually using digital technology, and the findings might surprise you. Here to unpack it all is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna. It’s a pleasure.
Host: So, Alex, let’s start with the big problem. Why did a study like this need to be done in the first place? What’s the common view of the craft sector?
Expert: The common view, and the core problem the study addresses, is that the craft and skilled trades industry is a digital laggard. It's often seen as being stuck in the past, missing out on the efficiencies and opportunities that technology offers.
Host: And that creates a knowledge gap, right? We assume we know what's happening, but maybe we don't.
Expert: Exactly. This perception isn't just a stereotype; it affects investment, policy, and how these businesses plan for the future. The study wanted to move past assumptions and create a clear map of what’s really going on. Are these businesses truly behind, or is the story more complex?
Host: So how did the researchers create this map? What was their approach?
Expert: They conducted what’s called a systematic literature review. In simple terms, they cast a very wide net, initially looking at over 1,500 sources. They then filtered those down to the 141 most relevant scientific papers and real-world practitioner reports to analyze exactly which digital technologies are being used, by which craft sectors, and for what purpose. It's a very thorough way of getting an evidence-based overview of a whole industry.
Host: That sounds incredibly detailed. So, after all that analysis, what did they find? Was the stereotype true?
Expert: Not at all. The biggest finding is that the craft industry is far from being a laggard. Instead, it's actively and strategically adopting a wide range of digital technologies. But—and this is the crucial part—it's not happening in a uniform way.
Host: What do you mean by that?
Expert: Well, the level and type of technology adoption varies hugely from one sector to another. For example, the study found that sectors like construction and textiles are integrating quite sophisticated technologies. Think AI, the Internet of Things, or Building Information Modeling—what's known as BIM—to manage complex projects.
Host: Okay, so that’s the high-tech end. What about more traditional crafts?
Expert: They’re digitizing too, but with different goals. A potter or a bespoke furniture maker might not need AI in their workshop. For them, technology is about reaching customers. So they prioritize simpler, but very effective, tools like social media for marketing and e-commerce platforms to sell their products globally. It's about finding the right tool for the job.
Host: That makes a lot of sense. The study also mentioned something about "value creation." What did it find there?
Expert: Right. This was a key insight. The analysis showed that nearly half of the businesses—about 48% of the cases—were using digital tools primarily for value creation. This means they are focused on optimizing their internal operations, like improving production processes or making their workflow more efficient. They are using technology to get better at what they already do.
Host: This is such a critical pivot from the old stereotype. Alex, this brings us to the most important question: Why does this matter for business? What are the practical takeaways for our listeners?
Expert: There are a few big ones, Anna. First, for anyone in the tech sector, the takeaway is: don't overlook so-called "traditional" industries. There are massive opportunities there, but you have to understand their specific needs. A one-size-fits-all solution won't work.
Host: So, context is everything.
Expert: Precisely. The second takeaway is for leaders in any industry, especially small and medium-sized businesses. The craft sector provides a masterclass in strategic tech adoption. It’s not about using tech for tech's sake; it's about choosing tools that enhance your core business without compromising your brand's authenticity.
Host: I see. So it's about using technology to amplify your strengths, not replace them.
Expert: Exactly. And the final, more strategic point is about balance. The study found many businesses focus technology on internal efficiency, or value creation. That's great, but there's a risk of neglecting other areas, like customer interaction. The lesson here is to ask: are we using technology across the whole business? To make our products, to market them, and to build lasting relationships with our customers? A balanced approach is what drives long-term growth.
Host: That's a powerful framework for any business leader to consider. So to recap: the craft industry is not a digital dinosaur, but a diverse ecosystem of strategic adopters. The key lesson is that digital transformation is most successful when it’s tailored to specific needs and values.
Host: Alex, this has been incredibly insightful. Thank you for breaking down this study for us.
Expert: My pleasure, Anna. It was great to be here.
Host: And thank you to our audience for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more insights from the world of business and technology.
crafts, digital transformation, digitalization, skilled trades, systematic literature review
International Conference on Wirtschaftsinformatik (2025)
Typing Less, Saying More? – The Effects of Using Generative AI in Online Consumer Review Writing
Maximilian Habla
This study investigates how using Generative AI (GenAI) impacts the quality and informativeness of online consumer reviews. Through a scenario-based online experiment, the research compares reviews written with and without GenAI assistance, analyzing factors like the writer's cognitive load and the resulting review's detail, complexity, and sentiment.
Problem
Writing detailed, informative online reviews is a mentally demanding task for consumers, which often results in less helpful content for others making purchasing decisions. While platforms use templates to help, these still require significant effort from the reviewer. This study addresses the gap in understanding whether new GenAI tools can make it easier for people to write better, more useful reviews.
Outcome
- Using GenAI significantly reduces the perceived cognitive load (mental effort) for people writing reviews.
- Reviews written with the help of GenAI are more informative, covering a greater number and a wider diversity of product aspects and topics.
- GenAI-assisted reviews tend to exhibit higher linguistic complexity and express a more positive sentiment, even when the star rating given by the user is the same.
- Contrary to the initial hypothesis, the reduction in cognitive load did not directly account for the increase in review informativeness, suggesting other mechanisms are at play.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study called "Typing Less, Saying More? – The Effects of Using Generative AI in Online Consumer Review Writing."
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, in a nutshell, what is this study about?
Expert: It investigates what happens when people use Generative AI tools, like ChatGPT, to help them write online consumer reviews. The core question is whether this AI assistance impacts the quality and informativeness of the final review.
Host: Let's start with the big problem. Why do we need AI to help us write reviews in the first place?
Expert: Well, we've all been there. A website asks you to leave a review, and you want to be helpful, but writing a detailed, useful comment is actually hard work.
Expert: It takes real mental effort, what researchers call 'cognitive load,' to recall your experience, select the important details, and structure your thoughts coherently.
Host: And because it's difficult, people often just write something very brief, like "It was great," which doesn't really help anyone.
Expert: Exactly. That lack of detail is a major problem for consumers who rely on reviews to make purchasing decisions. This study wanted to see if GenAI could be the solution to make it easier for people to write better, more useful reviews.
Host: So how did the researchers test this? What was their approach?
Expert: They conducted a scenario-based online experiment. They asked participants to write a review about their most recent visit to a Mexican restaurant.
Expert: People were randomly split into two groups. The first group, the control, used a traditional review template with a star rating and a blank text box, similar to what you’d find on Yelp today.
Expert: The second group, the treatment group, had a template with GenAI embedded. They could simply enter a few bullet points about their experience, click a "Generate Review" button, and the AI would draft a full, well-structured review for them.
Host: And by comparing the two groups, they could measure the impact of the AI. What were the key findings? Did it work?
Expert: It made a significant difference. First, the people who used the AI assistant reported that writing the review required much less mental effort.
Host: That makes sense. But were the AI-assisted reviews actually better?
Expert: They were. The study found that reviews written with GenAI were significantly more informative. They covered a greater number of specific details and a wider diversity of topics, like food, service, and ambiance, all in one review.
Host: That's a clear win for informativeness. Were there any other interesting outcomes?
Expert: Yes, a couple of surprising ones. The AI-generated reviews tended to use more complex language. And perhaps more importantly, they expressed a more positive sentiment, even when the star rating given by the user was exactly the same as someone in the control group.
Host: So, for the same four-star experience, the AI-written text sounded happier about it?
Expert: Precisely. The AI seems to have an inherent positivity bias. One last thing that puzzled the researchers was that the reduction in mental effort didn't directly explain the increase in detail. The relationship is more complex than they first thought.
Host: This is the most important question for our audience, Alex. Why does this matter for business? What are the practical takeaways?
Expert: This is a classic double-edged sword for any business with a digital platform. The upside is huge. Integrating GenAI into the review process could unlock a wave of richer, more detailed user-generated content.
Host: And more detailed reviews help other customers make better-informed decisions, which builds trust and drives sales.
Expert: Absolutely. But there are two critical risks to manage. First, that "linguistic complexity" I mentioned. The AI writes at a higher reading level, which could make the detailed reviews harder for the average person to understand, defeating the purpose.
Host: So you get more information, but it's less accessible. What's the other risk?
Expert: That positivity bias. If reviews generated by AI consistently sound more positive than the user's actual experience, it could mislead future customers. Negative aspects might be downplayed, creating a skewed perception of a product or service.
Host: So what should a business leader do with this information?
Expert: The takeaway is to embrace the technology but manage its side effects proactively. Platforms should consider adding features that simplify the AI's language or provide easy-to-read summaries. They also need to be aware of, and perhaps even flag, potential sentiment shifts to maintain transparency and consumer trust.
Host: So, to summarize: using GenAI for review writing makes the task easier and the output more detailed.
Host: However, businesses must be cautious, as it can also make reviews harder to read and artificially positive. The key is to implement it strategically to harness the benefits while mitigating the risks.
Host: Alex Ian Sutherland, thank you for these fantastic insights.
Expert: It was my pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time.
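The "greater number and diversity of topics" finding in this episode hinges on measuring informativeness. A minimal sketch of one way such a metric could work, counting how many distinct aspect categories a review touches; the keyword lexicon here is invented for illustration and is not the study's instrument:

```python
# Rough sketch of an "informativeness" metric: count how many distinct
# aspect categories (food, service, ambiance, price) a review mentions.
# The aspect lexicon below is hypothetical, not the study's measurement.

ASPECTS = {
    "food": {"taco", "tacos", "salsa", "flavor", "dish"},
    "service": {"waiter", "staff", "friendly", "slow", "service"},
    "ambiance": {"music", "decor", "cozy", "atmosphere", "noisy"},
    "price": {"cheap", "expensive", "value", "price"},
}

def aspect_coverage(review):
    """Return the set of aspect categories a review mentions."""
    words = {w.strip(".,!?").lower() for w in review.split()}
    return {aspect for aspect, cues in ASPECTS.items() if words & cues}

short_review = "It was great."
detailed_review = ("The tacos had great flavor, the staff were friendly, "
                   "and the cozy decor made for a nice atmosphere at a fair price.")

print(len(aspect_coverage(short_review)))     # 0 aspects: unhelpfully vague
print(len(aspect_coverage(detailed_review)))  # 4 aspects: far more informative
```

Real platforms would use topic models or LLM-based aspect extraction rather than a fixed word list, but the comparison logic is the same.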
International Conference on Wirtschaftsinformatik (2025)
Structural Estimation of Auction Data through Equilibrium Learning and Optimal Transport
Markus Ewert and Martin Bichler
This study proposes a new method for analyzing auction data to understand bidders' private valuations. It extends an existing framework by reformulating the estimation challenge as an optimal transport problem, which avoids the statistical limitations of traditional techniques. This novel approach uses a proxy equilibrium model to analytically evaluate bid distributions, leading to more accurate and robust estimations.
Problem
Designing profitable auctions, such as setting an optimal reserve price, requires knowing how much bidders are truly willing to pay, but this information is hidden. Existing methods to estimate these valuations from observed bids often suffer from statistical biases and inaccuracies, especially with limited data, leading to poor auction design and lost revenue for sellers.
Outcome
- The proposed optimal transport-based estimator consistently outperforms established kernel-based techniques, showing significantly lower error in estimating true bidder valuations.
- The new method is more robust, providing accurate estimates even in scenarios with high variance in bidding behavior where traditional methods fail.
- In practical tests, reserve prices set using the new method's estimates led to significant revenue gains for the auctioneer, while prices derived from older methods resulted in zero revenue.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we’re diving into a fascinating study called “Structural Estimation of Auction Data through Equilibrium Learning and Optimal Transport.”
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, this sounds quite technical, but at its heart, it’s about understanding what people are truly willing to pay for something. Is that right?
Expert: That’s a perfect way to put it, Anna. The study introduces a new, more accurate method for analyzing auction data to uncover bidders' hidden, private valuations. It uses a powerful mathematical concept called 'optimal transport' to get around the limitations of older techniques.
Host: So, let’s start with the big picture. What is the real-world problem that this study is trying to solve?
Expert: The problem is a classic one for any business that uses auctions. Think of a company selling online ad space, or a government auctioning off broadcast licenses. To maximize their revenue, they need to design the auction perfectly, for instance by setting an optimal reserve price—the minimum bid they'll accept.
Host: But to do that, you'd need to know the highest price each bidder is secretly willing to pay.
Expert: Exactly, and that information is hidden. You only see the bids they actually make. For decades, analysts have used statistical methods to try and estimate those true valuations from the bids, but those methods have serious flaws.
Host: Flaws like what?
Expert: They often require huge amounts of clean data to be accurate, which is rare in the real world. With smaller or messier datasets, these traditional methods can produce biased and inaccurate estimates. This leads to poor auction design, like setting a reserve price that's either too low, leaving money on the table, or too high, scaring away all the bidders. Either way, the seller loses revenue.
Host: So how does this new approach avoid those pitfalls? What is 'optimal transport'?
Expert: Imagine you have the bids you've observed in one pile. And over here, you have a theoretical model of how rational bidders would behave. Optimal transport is essentially a mathematical tool for finding the most efficient way to 'move' the pile of observed bids to perfectly match the shape of the theoretical model.
Host: Like finding the shortest path to connect the data you have with the theory?
Expert: Precisely. By calculating that 'path' or 'transport map', the researchers can analytically determine the underlying valuations with much greater precision. It avoids the statistical guesswork of older methods, which are often sensitive to noise and small sample sizes. It’s a more direct and robust way to get to the truth.
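The "moving one pile to match another" intuition can be made concrete in one dimension, where optimal transport with quadratic cost reduces to pairing sorted samples. A minimal sketch of that idea, comparing an observed bid sample against two candidate models; this illustrates the general concept, not the paper's estimator:

```python
# 1-D optimal transport sketch: with quadratic cost, the optimal map
# pairs sorted samples, and the Wasserstein-2 distance is the RMS gap
# between matched quantiles. Illustration only, not the paper's method.
import random

def w2_empirical(xs, ys):
    """Wasserstein-2 distance between two equal-size empirical samples."""
    xs, ys = sorted(xs), sorted(ys)
    return (sum((a - b) ** 2 for a, b in zip(xs, ys)) / len(xs)) ** 0.5

random.seed(0)
observed_bids = [random.uniform(0.0, 1.0) for _ in range(2000)]
close_model   = [random.uniform(0.0, 1.0) for _ in range(2000)]  # matches data
far_model     = [random.uniform(0.5, 1.5) for _ in range(2000)]  # shifted by 0.5

# A model whose bid distribution matches the data transports "cheaply";
# a mismatched model sits far away in Wasserstein distance (about 0.5 here).
print(w2_empirical(observed_bids, close_model))
print(w2_empirical(observed_bids, far_model))
```

Picking the candidate model with the cheapest transport cost is the spirit of fitting a proxy equilibrium to observed bids.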
Host: It sounds elegant. So, what were the key findings when they put this new method to the test?
Expert: The results were quite dramatic. First, the optimal transport method was consistently more accurate. It produced estimates of bidder valuations with significantly lower error compared to the established techniques.
Host: And was it more reliable with the 'messy' data you mentioned?
Expert: Yes, and this is a crucial point. It proved to be far more robust. In experiments with high variance in bidding behavior—scenarios where the older methods completely failed—this new approach still delivered accurate estimates. It can handle the unpredictability of real-world bidding.
Host: That all sounds great in theory, but does it actually lead to better business outcomes?
Expert: It does, and this was the most compelling finding. The researchers simulated setting a reserve price based on the estimates from their new method versus the old ones. The reserve price set using the new method led to significant revenue gains for the seller.
Host: And the old methods?
Expert: In the same test, the prices derived from the older methods were so inaccurate they led to zero revenue. The estimated reserve price was so high that it was predicted no one would bid at all. It’s a stark difference—going from zero revenue to a significant increase.
Host: That really brings it home. So, for the business leaders listening, what are the practical takeaways here? Why does this matter for them?
Expert: The most direct application is for any business involved in auctions. If you're in ad-tech, government procurement, or even selling assets, this is a tool to fundamentally improve your pricing strategy and increase your revenue. It allows you to make data-driven decisions with much more confidence.
Host: And beyond just setting a reserve price?
Expert: Absolutely. At a higher level, this is about getting a truer understanding of your market's demand and what your customers really value. That insight is gold. It can inform not just auction design, but broader product pricing, negotiation tactics, and strategic planning. It helps reduce the risk of mispricing, which is a major source of lost profit.
Host: Fantastic. So, to summarize: for any business running auctions, knowing what a bidder is truly willing to pay is the key to maximizing profit, but that information is hidden.
Host: This study provides a powerful new method using optimal transport to uncover those hidden values far more accurately and reliably than before. And as we've heard, the difference can be between earning zero revenue and earning a significant profit.
Host: Alex, thank you so much for breaking down this complex topic into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights — powered by Living Knowledge.
International Conference on Wirtschaftsinformatik (2025)
A Case Study on Large Vehicles Scheduling for Railway Infrastructure Maintenance: Modelling and Sensitivity Analysis
Jannes Glaubitz, Thomas Wolff, Henry Gräser, Philipp Sommerfeldt, Julian Reisch, David Rößler-von Saß, and Natalia Kliewer
This study presents an optimization-driven approach to scheduling large vehicles for preventive railway infrastructure maintenance, using real-world data from Deutsche Bahn. It employs a greedy heuristic and a Mixed Integer Programming (MIP) model to evaluate key factors influencing scheduling efficiency. The goal is to provide actionable insights for strategic decision-making and improve operational management.
Problem
Railway infrastructure maintenance is a critical operational task that often causes significant disruptions, delays, and capacity restrictions for both passenger and freight services. These disruptions reduce the overall efficiency and attractiveness of the railway system. The study addresses the challenge of optimizing maintenance schedules to maximize completed work while minimizing interference with regular train operations.
Outcome
- The primary bottleneck in maintenance scheduling is the limited availability and reusability of pre-defined work windows ('containers'), not the number of maintenance vehicles.
- Increasing scheduling flexibility by allowing work containers to be booked multiple times dramatically improves maintenance completion rates, from 84.7% to 98.2%.
- Simply adding more vehicles to the fleet provides only marginal improvements, as scheduling efficiency is the limiting factor.
- Increasing the operational radius for vehicles from depots and moderately extending shift lengths can further improve maintenance coverage.
- The analysis suggests that large, predefined maintenance containers are often inefficient and should be split into smaller sections to improve flexibility and resource utilization.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Every day, millions of people rely on railways to be on time. But keeping those tracks in top condition requires constant maintenance, which can often lead to the very delays we all want to avoid.
Host: Today, we’re diving into a fascinating study that tackles this exact challenge. It’s titled "A Case Study on Large Vehicles Scheduling for Railway Infrastructure Maintenance: Modelling and Sensitivity Analysis."
Host: It explores a new, data-driven way to schedule massive maintenance vehicles, using real-world data from Germany’s national railway, Deutsche Bahn, to find smarter ways of working.
Host: And to help us break it all down, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: So, Alex, we’ve all been on a train that’s been delayed by “planned engineering works.” Just how big of a problem is this for railway operators?
Expert: It’s a massive operational headache, Anna. The core conflict is that the maintenance needed to keep the railway safe and reliable is the very thing that causes disruptions, delays, and capacity restrictions.
Expert: This reduces the efficiency of the whole system for both passengers and freight. The challenge this study addresses is how to get the maximum amount of maintenance work done with the absolute minimum disruption to regular train services.
Host: It sounds like a classic Catch-22. So how did the researchers approach this complex puzzle?
Expert: They used a powerful, optimization-driven approach. Essentially, they built a sophisticated mathematical model of the entire maintenance scheduling problem.
Expert: They fed this model a huge amount of real-world data from Deutsche Bahn—we’re talking thousands of maintenance demands, hundreds of pre-planned work windows, and a whole fleet of different specialized vehicles.
Expert: Then, they used advanced algorithms to find the most efficient schedule, testing different scenarios to see which factors had the biggest impact on performance.
Host: A digital twin for track maintenance, in a way. So after running these scenarios, what were the key findings? What did they discover was the real bottleneck?
Expert: This is where it gets really interesting, and a bit counter-intuitive. The primary bottleneck wasn't a shortage of expensive maintenance vehicles.
Host: So buying more multi-million-dollar machines isn't the answer?
Expert: Exactly. The study found that simply adding more vehicles to the fleet provides only very marginal improvements. The real limiting factor was the availability and flexibility of the pre-defined work windows—what the planners call 'containers'.
Host: Tell us more about these 'containers'.
Expert: A container is a specific section of track that is blocked off for a specific period of time, usually an eight-hour shift overnight. The original policy was that once a container was booked for a job, it couldn't be used again within the planning period.
Expert: The study showed this was incredibly restrictive. By changing just one rule—allowing these work containers to be booked multiple times—the maintenance completion rate jumped dramatically from just under 85% to over 98%.
Host: Wow, a nearly 14-point improvement just from a simple policy change. That's a huge leap.
Expert: It is. It proves the problem wasn't a lack of resources, but a lack of flexibility in how those resources could be deployed. They also found that many of these predefined containers were too large and inefficient, preventing multiple machines from working in an area at once.
Host: This brings us to the most important part of our discussion, Alex. What does this mean for businesses, not just in the railway industry, but for any company managing complex logistics or operations?
Expert: I think there are three major takeaways here. First, focus on process before assets. The study proves that changing organizational rules and improving scheduling can deliver far greater returns than massive capital investments in new equipment.
Host: So, work smarter, not just richer.
Expert: Precisely. The second takeaway is that data-driven policy changes have an incredible return on investment. The ability to model and simulate the impact of a small rule change, like container reusability, is a powerful strategic tool. In fact, the study notes that Deutsche Bahn has since changed its policy to allow for more flexible booking.
Host: Real-world impact, that's what we love to see. And the third takeaway?
Expert: Re-evaluate your constraints. The study questioned the fundamental assumption that work windows were single-use and had to be a certain size. The lesson for any business leader is to ask: are our long-standing rules and constraints still serving us, or have they become the bottleneck themselves? Sometimes the biggest opportunities are hidden in the rules we take for granted.
Host: Fantastic insights. So, to summarize: the key to unlocking efficiency in complex operations often lies not in buying more equipment, but in optimizing the processes and rules that govern them.
Host: Alex, thank you so much for breaking down this complex study into such clear, actionable advice.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for joining us on A.I.S. Insights — powered by Living Knowledge. We’ll see you next time.
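The container-reuse effect discussed in this episode can be sketched with a toy greedy scheduler: a job needs a blocked-off track section ("container"), and under single-use booking each container serves at most one job. All data and simplifications below (no vehicles, shifts, or travel) are hypothetical, not Deutsche Bahn's model or numbers:

```python
# Toy greedy scheduler: single-use vs. reusable container booking.
# Hypothetical data, drastically simplified from the study's MIP model.

jobs = ["A", "A", "A", "B", "B", "C", "D", "D", "D", "D"]  # section each job needs
containers = {"A", "B", "C", "D"}                          # one container per section

def completed(jobs, containers, reusable):
    """Greedily schedule jobs into containers; count how many complete."""
    used, done = set(), 0
    for section in jobs:
        if section in containers and (reusable or section not in used):
            used.add(section)
            done += 1
    return done

print(completed(jobs, containers, reusable=False))  # 4 of 10 jobs scheduled
print(completed(jobs, containers, reusable=True))   # 10 of 10 jobs scheduled
```

Even in this tiny example, the binding constraint is the booking rule, not the amount of equipment, which mirrors the study's headline finding.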
International Conference on Wirtschaftsinformatik (2025)
Design Guidelines for Effective Digital Business Simulation Games: Insights from a Systematic Literature Review on Training Outcomes
Manuel Thomas Pflumm, Timo Phillip Böttcher, and Helmut Krcmar
This study analyzes 64 empirical papers to understand the effectiveness of Digital Business Simulation Games (DBSGs) as training tools. It systematically reviews existing research to identify key training outcomes and uses these findings to develop a practical framework of design guidelines. The goal is to provide evidence-based recommendations for creating and implementing more impactful business simulation games.
Problem
Businesses and universities increasingly use digital simulation games to teach complex decision-making, but their actual effectiveness varies. Research on what makes these games successful is scattered, and there is a lack of clear, comprehensive guidelines for developers and instructors. This makes it difficult to consistently design games and training programs that maximize learning and skill development.
Outcome
- The study identified four key training outcomes from DBSGs: attitudinal (how users feel about the training), motivational (engagement and drive), behavioral (teamwork and actions), and cognitive (critical thinking and skill development).
- Positive attitudes, motivation, and engagement were found to directly reinforce and enhance cognitive learning outcomes, showing that a user's experience is crucial for effective learning.
- The research provides a practical framework with specific guidelines for both the development of the game itself and the implementation of the training program.
- Key development guidelines include using realistic business scenarios, providing high-quality information, and incorporating motivating elements like compelling stories and leaderboards.
- Key implementation guidelines for instructors include proper preparation, pre-training briefings, guided debriefing sessions, and connecting the simulation experience to real-world business cases.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge.
Host: Today, we're diving into a study titled, "Design Guidelines for Effective Digital Business Simulation Games: Insights from a Systematic Literature Review on Training Outcomes."
Host: In short, it’s all about making corporate training games more than just a fun break from the workday. The study analyzed decades of research to build a practical framework for creating simulations that deliver real results.
Host: With me to unpack this is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So Alex, companies invest heavily in training. Digital simulations seem like a perfect tool for the modern workforce, but what's the core problem this study is tackling?
Expert: The big problem is inconsistency. Businesses and universities are using these simulation games to teach complex decision-making, but the actual effectiveness is all over the map. Some work brilliantly, while others fall flat.
Expert: The research on what makes them successful has been scattered. This means there's been no clear, comprehensive playbook for developers building the games or for instructors using them. This makes it tough to design training that consistently develops skills.
Host: So we have these potentially powerful tools, but we’re not quite sure how to build or use them to get the best results?
Expert: Exactly. It’s like having a high-performance engine without an instruction manual. This study essentially set out to write that manual based on hard evidence.
Host: How did the researchers go about creating this "manual"? What was their approach?
Expert: They took a very robust approach by conducting a systematic literature review. Think of it like a large-scale investigation of existing research.
Expert: They analyzed 64 empirical studies published between 2014 and 2024. By synthesizing the results from all these different sources, they were able to identify the patterns and principles that genuinely contribute to effective training.
Host: So rather than one new experiment, they've combined the knowledge of many to get a more reliable, big-picture view.
Expert: Precisely. It gives their conclusions a much stronger foundation.
Host: And what did this big-picture analysis reveal? What were the key findings?
Expert: The study identified four key training outcomes from these games: attitudinal, motivational, behavioral, and cognitive.
Host: Can you break that down for us?
Expert: Of course. 'Attitudinal' is how participants feel about the training – was it useful, were they satisfied? 'Motivational' is their engagement and drive. 'Behavioral' relates to their actions, like teamwork and problem-solving. And 'cognitive' is the ultimate goal: did they actually develop new skills and improve their critical thinking?
Host: So it's not just about what people learn, but also how they feel and act during the training.
Expert: Yes, and this is the most important connection the study found. Positive attitudes and high motivation weren't just nice side effects; they directly reinforced and enhanced the cognitive learning. When a user finds a simulation engaging and useful, they simply learn more. The user experience is crucial.
Host: That’s a fascinating link. This brings us to the most important part for our listeners. What does this mean for business? What are the practical takeaways?
Expert: This is where the study provides a clear, two-part roadmap. It gives guidelines for both developing the game and for implementing the training.
Host: Let’s start with development. What should a business leader look for in a simulation?
Expert: The guidelines are very specific. The most effective simulations use realistic business scenarios that mirror real-world decisions. They provide high-quality information, not just abstract data. And they use motivating elements—things like a compelling story, clear progression, and even leaderboards to foster healthy competition.
Host: So the game itself has to be well-crafted and relevant. What about the implementation part?
Expert: This is just as critical, and it’s where many programs fail. The study emphasizes that you can't just hand over the software and hope for the best. The role of the trainer or facilitator is paramount.
Expert: For example, a pre-training briefing is essential. It sets the stage, clarifies the learning goals, and reduces the initial cognitive overload for participants.
Host: And what about after the game is played?
Expert: This is the single most important step: the debriefing. A guided debriefing session allows participants to reflect on their decisions, analyze the results, and, crucially, connect the simulation experience to their actual jobs. Without that guided reflection, the learning often stays locked inside the game.
Host: So the big takeaway is that it’s a formula: you need a well-designed game, plus a well-structured training program wrapped around it.
Expert: That is the evidence-based recipe for success. One without the other just won’t deliver the same impact.
Host: To summarize then: Digital Business Simulations can be incredibly effective, but their success is no accident.
Host: This study provides a clear blueprint. It shows that effectiveness depends on both the game's design—making it realistic and motivating—and its implementation, with briefings and debriefings being essential to bridge the gap between the simulation and the real world.
Host: And we learned that a trainee’s engagement and attitude aren't soft metrics; they are direct drivers of learning.
Host: Alex, thank you for these fantastic, actionable insights.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to decode the research that is shaping the future of business.
Digital business simulation games, training effectiveness, design guidelines, literature review, corporate learning, experiential learning
International Conference on Wirtschaftsinformatik (2025)
The PV Solution Guide: A Prototype for a Decision Support System for Photovoltaic Systems
Chantale Lauer, Maximilian Lenner, Jan Piontek, and Christian Murlowski
This study presents the conceptual design of the 'PV Solution Guide,' a user-centric prototype for a decision support system for homeowners considering photovoltaic (PV) systems. The prototype uses a conversational agent and 3D modeling to adapt guidance to specific house types and the user's level of expertise. An initial evaluation compared the prototype's usability and trustworthiness against an established tool.
Problem
Current online tools and guides for homeowners interested in PV systems are often too rigid, failing to accommodate unique home designs or varying levels of user knowledge. Information is frequently scattered, incomplete, or biased, leading to consumer frustration, distrust, and decision paralysis, which ultimately hinders the adoption of renewable energy.
Outcome
- The study developed the 'PV Solution Guide,' a prototype decision support system designed to be more adaptive and user-friendly than existing tools.
- In a comparative evaluation, the prototype significantly outperformed the established 'Solarkataster Rheinland-Pfalz' tool in usability, with a System Usability Scale (SUS) score of 80.21 versus 56.04.
- The prototype also achieved a higher perceived trust score (82.59% vs. 76.48%), excelling in perceived benevolence and competence.
- Key features contributing to user trust and usability included transparent cost structures, personalization based on user knowledge and housing, and an interactive 3D model of the user's home.
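For context on the SUS numbers above: the System Usability Scale is a standard ten-item questionnaire, and its 0–100 score follows a fixed scoring rule (this sketch shows the standard SUS calculation as background; it is not code from the study itself).

```python
def sus_score(responses):
    """Compute a System Usability Scale score from one participant's
    ten responses, each on a 1-5 Likert scale (standard SUS scoring)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (index 0, 2, ...) are positively worded: score - 1.
        # Even-numbered items are negatively worded: 5 - score.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to the 0-100 SUS range

# A respondent who strongly agrees with every positive item and strongly
# disagrees with every negative item reaches the maximum score of 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

On this scale, the prototype's mean of 80.21 sits well above the commonly cited average of 68, while 56.04 falls below it, which is why the study describes the gap as substantial.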
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into the world of renewable energy and customer decision-making with a fascinating new study titled "The PV Solution Guide: A Prototype for a Decision Support System for Photovoltaic Systems".
Host: The study presents a new prototype tool designed to help homeowners navigate the complex process of installing solar panels, using a conversational agent and 3D modeling to personalize the experience.
Host: With me to break it all down is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: Alex, let's start with the big picture. Why is a new tool for solar panel guidance even necessary? What's the problem with what’s currently available?
Expert: It’s a great question. The core problem is what the study calls decision paralysis. Homeowners are interested in solar, but they face a confusing landscape.
Expert: Information is scattered across forums, manufacturer websites, and government portals. It's often incomplete, biased, or too technical.
Expert: Existing online calculators are often rigid. They don't account for unique house designs or a person's specific level of knowledge. This leads to frustration, a lack of trust, and ultimately, people just give up on their plans to go solar.
Host: So a classic case of information overload leading to inaction. How did the researchers in this study approach solving that problem?
Expert: They took a very human-centered approach. First, they conducted in-depth interviews with homeowners—both current solar owners and prospective buyers—to understand their exact needs and pain points.
Expert: Using those insights, they designed and built an interactive prototype called the 'PV Solution Guide'.
Expert: The final step was to test it.
They had a group of users try both their new prototype and a well-established, existing government tool, and then compared the results on key metrics like usability and trust.
Host: A very thorough process. And what did they find? How did this new prototype stack up against the established tool?
Expert: The results were quite dramatic. In terms of usability, the prototype blew the existing tool out of the water.
Expert: It scored over 80 on the System Usability Scale, or SUS, which is an excellent score. The established tool scored just 56, which is considered below average.
Host: That’s a huge difference. What about trust? That seems to be a major hurdle.
Expert: It is, and the prototype excelled there as well. It achieved a significantly higher perceived trust score.
Expert: The study broke this down further and found the prototype scored much higher on 'perceived competence,' meaning users felt it had the necessary functions to do the job, and 'perceived benevolence,' which means they felt the system was actually trying to help them.
Host: What features were responsible for that success?
Expert: Three things really stood out. First, transparent cost structures. Users could see a detailed breakdown of costs and amortization.
Expert: Second, personalization. The system used a conversational agent, like a chatbot, to adapt its guidance based on the user's level of knowledge and their specific house.
Expert: And third, the interactive 3D model of the user's home. It allowed people to visually add or remove components and instantly see the impact on the system and the price.
Host: This all sounds incredibly useful for a homeowner. But let's zoom out. Why does this matter for our business audience? What are the key takeaways here?
Expert: I think there are two major implications. For any business in the renewable energy sector, this is a roadmap for reducing customer friction.
Expert: A tool like this can democratize access to high-quality consulting, build trust early, and help companies generate more accurate offers, which saves everyone time and money. It overcomes that decision paralysis we talked about.
Host: And for businesses outside of the energy sector?
Expert: This study is a powerful case study for anyone selling complex or high-stakes products, whether it's in finance, insurance, or even B2B technology.
Expert: It proves that the combination of conversational AI and interactive visualization is incredibly effective at simplifying complexity. It transforms the user from a passive recipient of data into an active participant in designing their own solution. That builds both confidence and trust.
Expert: The key lesson is that to win over modern customers, you can't just provide information; you have to provide a guided, transparent, and personalized experience.
Host: So, the big takeaways are that homeowners are getting stuck when trying to adopt solar, but a personalized, interactive tool can solve that by dramatically improving usability and trust.
Host: And for businesses, this highlights a powerful new model for customer engagement: using technology to guide users through complex decisions, not just present them with data.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And a big thank you to our audience for tuning in to A.I.S. Insights. We'll see you next time.
Decision Support Systems, Photovoltaic Systems, Human-Centered Design, Qualitative Research
International Conference on Wirtschaftsinformatik (2025)
AI at Work: Intelligent Personal Assistants in Work Practices for Process Innovation
Zeynep Kockar, Mara Burger
This paper explores how AI-based Intelligent Personal Assistants (IPAs) can be integrated into professional workflows to foster process innovation and improve adaptability. Utilizing the Task-Technology Fit (TTF) theory as a foundation, the research analyzes data from an interview study with twelve participants to create a framework explaining IPA adoption, their benefits, and their limitations in a work context.
Problem
While businesses are increasingly adopting AI technologies, there is a significant research gap in understanding how Intelligent Personal Assistants specifically influence and innovate work processes in real-world professional settings. Prior studies have focused on adoption challenges or automation benefits, but have not thoroughly examined how these tools integrate with existing workflows and contribute to process adaptability.
Outcome
- IPAs enhance workflow integration in four key areas: providing guidance and problem-solving, offering decision support and brainstorming, enabling workflow automation for efficiency, and facilitating language and communication tasks.
- The adoption of IPAs is primarily driven by social influence (word-of-mouth), the need for problem-solving and efficiency, curiosity, and prior academic or professional background with the technology.
- Significant barriers to wider adoption include data privacy and security concerns, challenges integrating IPAs with existing enterprise systems, and limitations in the AI's memory, reasoning, and creativity.
- The study developed a framework that illustrates how factors like work context, existing tools, and workflow challenges influence the adoption and impact of IPAs.
- Regular users tend to integrate IPAs for strategic and creative tasks, whereas occasional users leverage them for more straightforward or repetitive tasks like documentation.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're exploring how the AI tools many of us are starting to use can actually drive real innovation in our work. We're diving into a fascinating study titled "AI at Work: Intelligent Personal Assistants in Work Practices for Process Innovation."
Host: It explores how AI-based Intelligent Personal Assistants, or IPAs, can be integrated into our daily professional workflows to foster innovation and help us adapt. To break it all down for us, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. We hear a lot about businesses adopting AI, but what was the specific problem this study wanted to tackle?
Expert: Well, while companies are rushing to adopt tools like ChatGPT, there's a real gap in understanding how they actually change our work processes day-to-day. Most research has focused on the challenges of getting people to use them or the benefits of pure automation. This study looked deeper.
Host: Deeper in what way?
Expert: It asked the question: How do these AI assistants really integrate with our existing workflows, and how do they help us not just do things faster, but do them in new, more innovative ways? It’s about moving beyond simple automation to genuine process innovation.
Host: So how did the researchers get these insights? What was their approach?
Expert: They took a very practical approach. They conducted in-depth interviews with twelve professionals from a technology consultancy and a gaming company—people who are already using these tools in their jobs. They spoke to a mix of regular, daily users and more occasional users to get a really well-rounded perspective.
Host: That makes sense. By talking to real users, you get the real story. So, what did they find? What were the key outcomes?
Expert: They identified four main ways these IPAs enhance our workflows. First, for guidance and problem-solving, like helping to structure a new project or scope its different phases. Second, for decision support and brainstorming, acting as a creative partner.
Host: Okay, so it’s like a strategic assistant. What are the other two?
Expert: The third is workflow automation. This is the one we hear about most—automating things like writing documentation, which one participant said could now be done in minutes instead of hours. And fourth, it helps with language and communication tasks, like refining emails or translating text.
Host: It sounds incredibly useful. But we know adoption isn't always smooth. Did the study uncover why some people start using these tools and what holds others back?
Expert: Absolutely. The biggest driver for adoption was social influence—hearing about it from a colleague or a friend. The need to solve a specific problem and simple curiosity were also major factors. But there are significant barriers, too.
Host: I imagine things like data privacy are high on that list.
Expert: Exactly. Data privacy and security were the top concerns. People are wary of putting sensitive company information into a public tool. Other major hurdles are challenges integrating the AI with existing company systems and the AI's own limitations, like its limited memory or occasional lack of creativity and reasoning.
Host: So, Alex, this brings us to the most important question for our listeners. Based on this study, what's the key takeaway for a business leader or a manager? Why does this matter?
Expert: It matters because it shows that successfully using AI isn't just about giving everyone a license. It’s about understanding the Task-Technology Fit. Leaders need to help their teams see which tasks are a good fit for an IPA. The study found that regular users applied AI to complex, strategic tasks, while occasional users stuck to simpler, repetitive ones.
Host: So it's not a one-size-fits-all solution.
Expert: Not at all. Businesses need to proactively address the barriers. Be transparent about data security policies. Create strategies for how these tools can safely integrate with your internal systems. And foster a culture of experimentation where it's okay to start small, maybe with lower-risk tasks like brainstorming or drafting documents, to build confidence.
Host: That sounds like a very actionable strategy. Encourage the right use-cases while actively managing the risks.
Expert: Precisely. The goal is to make the technology fit the work, not the other way around. When that happens, you unlock real process innovation.
Host: Fantastic insights, Alex. So, to summarize for our audience: AI assistants can be powerful engines for innovation, helping with everything from strategic planning to automating routine work. But success depends on matching the tool to the task, directly addressing employee concerns like data privacy, and understanding that different people will use these tools in very different ways.
Host: Alex Ian Sutherland, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights, powered by Living Knowledge. We’ll see you next time.
Intelligent Personal Assistants, Process Innovation, Workflow, Task-Technology Fit Theory
Jurnal SISFO (2025)
Perbaikan Proses Bisnis Onboarding Pelanggan di PT SEVIMA Menggunakan Heuristic Redesign (Improving the Customer Onboarding Business Process at PT SEVIMA Using Heuristic Redesign)
Ribka Devina Margaretha, Mahendrawathi ER, Sugianto Halim
This study addresses challenges in PT SEVIMA's customer onboarding process, where Account Managers (AMs) were not always aligned with client needs. Using a Business Process Management (BPM) Lifecycle approach combined with heuristic principles (Resequencing, Specialize, Control Addition, and Empower), the research redesigns the existing workflow. The goal is to improve the matching of AMs to clients, thereby increasing onboarding efficiency and customer satisfaction.
Problem
PT SEVIMA, an IT startup for the education sector, struggled with an inefficient customer onboarding process. The primary issue was the frequent mismatch between the assigned Account Manager's skills and the specific, technical needs of the new client, leading to implementation delays and decreased satisfaction.
Outcome
- Recommends grouping Account Managers (AMs) based on specialization profiles built from post-project evaluations.
- Suggests moving the initial client needs survey to occur before an AM is assigned to ensure a better match.
- Proposes involving the technical migration team earlier in the process to align strategies from the start.
- These improvements aim to enhance onboarding efficiency, reduce rework, and ultimately increase client satisfaction.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. In today's fast-paced business world, how you welcome a new customer can make or break the entire relationship. Today, we're diving into a study that tackles this very challenge.
Host: It’s titled, "Perbaikan Proses Bisnis Onboarding Pelanggan di PT SEVIMA Menggunakan Heuristic Redesign" (in English: "Improving the Customer Onboarding Business Process at PT SEVIMA Using Heuristic Redesign"). It explores how an IT startup, PT SEVIMA, redesigned their customer onboarding process to better match their account managers to client needs, boosting both efficiency and satisfaction. Here to break it all down for us is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. What was the core problem that PT SEVIMA was trying to solve?
Expert: It's a classic startup growing pain. PT SEVIMA provides software for the education sector. Their success hinges on getting new university clients set up smoothly. But they had a major bottleneck: they were assigning Account Managers, or AMs, to new clients without a deep understanding of the client's specific technical needs.
Host: So it was a mismatch of skills?
Expert: Exactly. You might have an AM who is brilliant with financial systems assigned to a client whose main challenge is student registration. The study's analysis, using tools like a fishbone diagram, showed this created a domino effect: implementation delays, frustrated clients, and a lot of rework for the internal teams. It was inefficient and hurting customer relationships right from the start.
Host: It sounds like a problem many companies could face. So, how did the researchers approach fixing this?
Expert: They used a structured method called Business Process Management, but combined it with something called heuristic principles. It sounds technical, but it's really about applying practical, proven rules of thumb to improve a workflow. Think of it as a toolkit of smart solutions.
Host: Can you give us an example of one of those "smart solutions"?
Expert: Absolutely. The four key principles they used were Resequencing, Specialization, Control Addition, and Empowerment. Resequencing, for instance, just means changing the order of steps. They found that one simple change could have a huge impact.
Host: I'm intrigued. What were the key findings or recommendations that came out of this approach?
Expert: There were three game-changers. First, using that Resequencing principle, they recommended moving the initial client needs survey to happen *before* an Account Manager is assigned. Get a deep understanding of the client's needs first, then pick the right person for the job.
Host: That seems so logical, yet it’s a step that's often overlooked. What was the second finding?
Expert: That was about Specialization. The study proposed grouping AMs into specialist profiles based on their skills and performance on past projects. After each project, AMs are evaluated on their expertise in areas like data management or academic systems. This creates a clear profile of who is good at what.
Host: So you’re not just assigning the next available person, you’re matching a specialist to a specific problem.
Expert: Precisely. And the third key recommendation was about Empowerment. They suggested involving the technical migration team much earlier in the process. Instead of the AM handing down instructions, the tech team is part of the initial strategy session, which helps them anticipate problems and align on the best approach from day one.
Host: This all sounds incredibly practical. Let's shift to the big question for our listeners: why does this matter for their businesses, even if they aren't in educational tech?
Expert: This is the most crucial part. These findings offer universal lessons for any business. First, it proves that customer onboarding is a strategic process, not just an administrative checklist. A smooth start builds trust and dramatically improves long-term retention.
Host: What's the second big takeaway?
Expert: Don't just assign people, *match* them. The idea of creating specialization profiles is powerful. Every manager should know their team's unique strengths and align them with the right tasks or clients. It reduces errors, builds employee confidence, and delivers better results for the customer.
Host: It’s about putting your players in the right positions on the field.
Expert: Exactly. And finally, front-load your discovery process. The study showed that the simple act of moving a survey to the beginning of the process prevents misunderstandings and costly rework. Take the time to understand your customer's reality deeply before you start building or implementing a solution. It’s about being proactive, not reactive.
Host: Fantastic insights, Alex. So, to recap for our listeners: a smarter onboarding process comes from matching the right expertise to the client, understanding their needs deeply before you begin, and empowering your technical teams by bringing them in early.
Host: Alex Ian Sutherland, thank you so much for translating this study into such clear, actionable advice.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more valuable lessons from the world of business and technology research.
Business Process Redesign, Customer Onboarding, Knowledge-Intensive Process, Heuristics Method, Startup, BPM Lifecycle
MIS Quarterly Executive (2023)
Successfully Organizing AI Innovation Through Collaboration with Startups
Jana Oehmichen, Alexander Schult, John Qi Dong
This study examines how established firms can successfully partner with Artificial Intelligence (AI) startups to foster innovation. Based on an in-depth analysis of six real-world AI implementation projects across two startups, the research identifies five key challenges and provides corresponding recommendations for navigating these collaborations effectively.
Problem
Established companies often lack the specialized expertise needed to leverage AI technologies, leading them to partner with startups. However, these collaborations introduce unique difficulties, such as assessing a startup's true capabilities, identifying high-impact AI applications, aligning commercial interests, and managing organizational change, which can derail innovation efforts.
Outcome
- Challenge 1: Finding the right AI startup. Firms should overcome the inscrutability of AI startups by assessing credible quality signals, such as investor backing, academic achievements of staff, and success in prior contests, rather than relying solely on product demos.
- Challenge 2: Identifying the right AI use case. Instead of focusing on data availability, companies should collaborate with startups in workshops to identify use cases with the highest potential for value creation and business impact.
- Challenge 3: Agreeing on commercial terms. To align incentives and reduce information asymmetry, contracts should include performance-based or usage-based compensation, linking the startup's payment to the value generated by the AI solution.
- Challenge 4: Considering the impact on people. Firms must manage user acceptance by carefully selecting the degree of AI autonomy, involving employees in the design process, and clarifying the startup's role to mitigate fears of job displacement.
- Challenge 5: Overcoming implementation roadblocks. Depending on the company's organizational maturity, it should either facilitate deep collaboration between the startup and all internal stakeholders or use the startup to build new systems that bypass internal roadblocks entirely.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a study that’s crucial for any company looking to innovate: "Successfully Organizing AI Innovation Through Collaboration with Startups".
Host: It examines how established firms can successfully partner with Artificial Intelligence startups, identifying key challenges and offering a roadmap for success.
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: Alex, let's start with the big picture. Why is this a topic business leaders need to pay attention to right now?
Expert: Well, most established companies know they need to leverage AI to stay competitive, but they often lack the highly specialized internal talent. So, they turn to agile, expert AI startups for help.
Host: That sounds like a straightforward solution. But the study suggests it’s not that simple.
Expert: Exactly. These collaborations are fraught with unique difficulties. How do you assess if a startup's flashy demo is backed by real capability? How do you pick a project that will actually create value and not just be an interesting experiment? These partnerships can easily derail if not managed correctly.
Host: So how did the researchers get to the bottom of this? What was their approach?
Expert: They took a very hands-on approach. The research team conducted an in-depth analysis of six real-world AI implementation projects. These projects involved two different AI startups working with large companies in sectors like telecommunications, insurance, and logistics.
Expert: This allowed them to see the challenges and successes from both the startup's and the established company's perspective, right as they happened.
Host: Let's get into those findings. The study outlines five major challenges. What’s the first hurdle companies face?
Expert: The first is simply finding the right AI startup.
The market is noisy, and AI has become a buzzword. The study found that you can't rely on product demos alone.
Host: So what's the recommendation?
Expert: Look for credible, external quality signals. Has the startup won competitive grants or contests? Is it backed by specialized, knowledgeable investors? What are the academic or prior career achievements of its key people? These are signals that other experts have already vetted their capabilities.
Host: That’s great advice. It’s like checking references for the entire company. Once you've found a partner, what’s Challenge Number Two?
Expert: Identifying the right AI use case. Many companies make the mistake of asking, "We have all this data, what can AI do with it?" This often leads to projects with low business impact.
Host: So what's the better question to ask?
Expert: The better question is, "What are our biggest business challenges, and how can AI help solve them?" The study recommends collaborative workshops where the startup can bring its outside-in perspective to help identify use cases with the highest potential for real value creation.
Host: Focus on the problem, not just the data. That makes perfect sense. What about Challenge Three: getting the contract right?
Expert: This is a big one. Because AI can be a "black box," it's hard for the client to know how much effort is required. This creates an information imbalance. The key is to align incentives.
Expert: The study strongly recommends moving away from traditional flat fees and towards performance-based or usage-based compensation. For example, an insurance company in the study paid the startup based on the long-term financial impact of the AI model, like increased profit margins. This ensures both parties are working toward the same goal.
Host: A true partnership model. Now, the last two challenges seem to focus on the human side of things: people and process.
Expert: Yes, and they're often the toughest.
Challenge Four is managing the impact on your employees. AI can spark fears of job displacement, leading to resistance.
Expert: The recommendation here is to manage the degree of AI autonomy carefully. For instance, a telecom company in the study introduced an AI tool that initially just *suggested* answers to call center agents rather than handling chats on its own. It made the agents more efficient—doubling productivity—without making them feel replaced.
Host: That builds trust and acceptance. And the final challenge?
Expert: Overcoming internal implementation roadblocks. Getting an AI solution integrated requires buy-in from IT, data security, legal, and business units, all of whom have their own priorities.
Expert: The study found two paths. If your organization has the maturity, you build a cross-functional team to collaborate deeply with the startup. But if your internal processes are too rigid, the more effective path can be to have the startup build a new, standalone system that bypasses those internal roadblocks entirely.
Host: Alex, this is incredibly insightful. To wrap up, what is the single most important takeaway for a business leader listening to our conversation today?
Expert: The key takeaway is that you cannot treat an AI startup collaboration as a simple vendor procurement. It is a deep, strategic partnership. Success requires a new mindset.
Expert: You have to vet your partner strategically, focus relentlessly on business value, align financial incentives to create a win-win, and most importantly, proactively manage the human and organizational change. It’s as much about culture as it is about code.
Host: From procurement to partnership. A powerful summary. Alex Ian Sutherland, thank you so much for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping business and technology.
Artificial Intelligence, AI Innovation, Corporate-startup collaboration, Open Innovation, Digital Transformation, AI Startups
MIS Quarterly Executive (2023)
Managing Where Employees Work in a Post-Pandemic World
Molly Wasko, Alissa Dickey
This study examines how a large manufacturing company navigated the challenges of remote and hybrid work following the COVID-19 pandemic. Through an 18-month case study, the research explores the impacts on different employee groups (virtual, hybrid, and on-site) and provides recommendations for managing a blended workforce. The goal is to help organizations, particularly those with significant physical operations, balance new employee expectations with business needs.
Problem
The widespread shift to remote work during the pandemic created a major challenge for businesses deciding on their long-term workplace strategy. Companies are grappling with whether to mandate a full return to the office, go fully remote, or adopt a hybrid model. This problem is especially complex for industries like manufacturing that rely on physical operations and cannot fully digitize their entire workforce.
Outcome
- Employees successfully adapted information and communication technology (ICT) to perform many tasks remotely, effectively separating their work from a physical location.
- Contrary to expectations, on-site workers who remained at the physical workplace throughout the pandemic reported feeling the most isolated, least valued, and dissatisfied.
- Despite demonstrated high productivity and employee desire for flexibility, business leaders still strongly prefer having employees co-located in the office, believing it is crucial for building and maintaining the company's core values.
- A 'Digital-Physical Intensity' framework was developed to help organizations classify jobs and make objective decisions about which roles are best suited for on-site, hybrid, or virtual work.
Host: Welcome to A.I.S. Insights, the podcast where we connect academic research to real-world business strategy. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a challenge every leader is facing: where should our employees work? We’re looking at a fascinating study from MIS Quarterly Executive titled, "Managing Where Employees Work in a Post-Pandemic World".
Host: It’s an 18-month case study of a large manufacturing company, exploring the impacts of virtual, hybrid, and on-site work to help businesses balance new employee expectations with their operational needs.
Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome back to the show.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. The study highlights a problem that I know keeps executives up at night. What’s the core tension they identified?
Expert: The core tension is a fundamental disconnect. On one hand, employees have experienced the flexibility of remote work and productivity has remained high. They don't want to give that up.
Expert: On the other hand, many business leaders are pushing for a full return to the office. They believe that having everyone physically together is essential for building and maintaining the company's culture and values.
Expert: This is especially complicated for industries like manufacturing that the study focused on, because you have some roles that can be done from anywhere and others that absolutely require someone to be on a factory floor.
Host: So how did the researchers get inside this problem to really understand it?
Expert: They did a deep dive into a 100-year-old company they call "IMC," a global manufacturer of heavy-duty vehicles. Over 18 months, they surveyed and spoke with employees from every part of the business—from HR and accounting who went fully virtual, to engineers on a hybrid schedule, to the production staff who never left the facility.
Expert: This gave them a 360-degree view of how technology was adopted and how each group experienced the shift. Host: That sounds incredibly thorough. Let's get to the findings. What was the most surprising thing they discovered? Expert: By far the most surprising finding was who felt the most disconnected. The company’s leadership was worried about the virtual workers feeling isolated at home. Expert: But the study found the exact opposite. It was the on-site workers—the ones who came in every day—who reported feeling the most isolated, the least valued, and the most dissatisfied. Host: Wow. That is completely counter-intuitive. Why was that? Expert: Think about their experience. They were coming into a workplace with constant, visible reminders of the risks—masks, safety protocols, social distancing. Their normal face-to-face interactions were severely limited. Expert: They would see empty offices and parking lots, a daily reminder that their colleagues in virtual roles had a flexibility and safety they didn't. One worker described it as feeling like they were "hit by a bulldozer mentally." They felt left behind. Host: That’s a powerful insight. And while this was happening, what did the study find about leadership's perspective? Expert: Despite seeing that productivity and customer satisfaction remained high, the leadership at IMC still had a strong preference for co-location. They felt that the company’s powerful culture was, in their words, "inextricably linked" to having people together in person. This created that disconnect we talked about. Host: This brings us to the most important question for our listeners: what do we do about it? How can businesses navigate this without alienating one group or another? Expert: This is the study's key contribution. They developed a practical tool called the 'Digital-Physical Intensity' framework. 
Expert: Instead of creating policies based on job titles or departments, this framework helps you classify work based on two simple questions: First, how much of the job involves processing digital information? And second, how much of it involves interacting with physical objects or locations? Host: So it's a more objective way to decide which roles are best suited for on-site, hybrid, or virtual work. Expert: Exactly. A role in HR or accounting is high in information intensity but low in physical intensity, making it a great candidate for virtual work. A role on the assembly line is the opposite. Engineering and design roles often fall in the middle, making them perfect for a hybrid model. Expert: Using a framework like this makes decisions transparent and justifiable, which reduces that feeling of unfairness that was so damaging to the on-site workers' morale. Host: So the first takeaway is to use an objective framework. What’s the second big takeaway for leaders? Expert: The second is to actively challenge the assumption that culture only happens in the office. This study suggests the bigger risk isn't losing culture with remote workers, it's demoralizing the essential employees who have to be on-site. Expert: Leaders need to find new ways to support them. That could mean repurposing empty office space to improve their facilities, offering more scheduling flexibility, or re-evaluating compensation to acknowledge the extra costs and risks they take on. Host: This has been incredibly enlightening, Alex. So, to summarize for our audience: Host: First, the feelings of inequity between employee groups are a huge risk, and contrary to popular belief, it's often your on-site teams who feel the most isolated. Host: Second, leaders must challenge their own deeply-held beliefs about the necessity of co-location for building a strong company culture. 
Host: And finally, using an objective tool like the Digital-Physical Intensity framework can help you create fair, transparent policies that build trust across your entire blended workforce. Host: Alex Ian Sutherland, thank you for making this research so clear and actionable for us. Expert: My pleasure, Anna. Host: And thank you for tuning into A.I.S. Insights — powered by Living Knowledge. Join us next time for more data-driven strategies for your business.
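The 'Digital-Physical Intensity' classification discussed in the episode can be sketched in code. This is a minimal illustration only: the study poses the two questions (how much of the job is processing digital information, and how much involves physical objects or locations) but does not define a numeric scale or thresholds, so the 0–10 ratings, cutoff values, and the function name `classify_work_mode` below are assumptions made for demonstration.

```python
def classify_work_mode(information_intensity: int, physical_intensity: int) -> str:
    """Suggest a work mode from two illustrative 0-10 ratings.

    information_intensity: how much of the job involves processing
        digital information.
    physical_intensity: how much of the job requires interacting with
        physical objects or locations.
    Thresholds are hypothetical, not taken from the study.
    """
    if physical_intensity >= 7:
        # High physical intensity dominates, e.g. assembly-line roles.
        return "on-site"
    if information_intensity >= 7 and physical_intensity <= 3:
        # High information, low physical, e.g. HR or accounting.
        return "virtual"
    # Mixed profiles, e.g. engineering and design roles.
    return "hybrid"


# Examples mirroring the roles discussed in the episode:
print(classify_work_mode(information_intensity=9, physical_intensity=2))  # virtual
print(classify_work_mode(information_intensity=3, physical_intensity=9))  # on-site
print(classify_work_mode(information_intensity=6, physical_intensity=5))  # hybrid
```

The point of encoding the rule, even roughly, is the one Alex makes above: a transparent, role-based criterion is easier to justify to employees than ad-hoc decisions by job title or department.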