International Conference on Wirtschaftsinformatik (2025)
Navigating Generative AI Usage Tensions in Knowledge Work: A Socio-Technical Perspective
Anna Gieß, Sofia Schöbel, and Frederik Möller
This study explores the complex challenges and advantages of integrating Generative Artificial Intelligence (GenAI) into knowledge-based work. Using socio-technical systems theory, the researchers conducted a systematic literature review and qualitative interviews with 18 knowledge workers to identify key points of conflict. The paper proposes solutions like human-in-the-loop models and robust AI governance policies to foster responsible and efficient GenAI usage.
Problem
As organizations rapidly adopt GenAI to boost productivity, they face significant tensions between efficiency, reliability, and data privacy. There is a need to understand these conflicting forces to develop strategies that maximize the benefits of GenAI while mitigating risks related to ethics, data protection, and over-reliance on the technology.
Outcome
- Productivity-Reflection Tension: GenAI increases efficiency but can lead to blind reliance and reduced critical thinking on the content it generates.
- Availability-Reliability Contradiction: While GenAI offers constant access to information, its output is not always reliable, increasing the risk of misinformation.
- Efficiency-Traceability Dilemma: Content is produced quickly, but the lack of clear source references makes verification difficult in professional settings.
- Usefulness-Transparency Tension: The utility of GenAI is limited by a lack of transparency in how it generates outputs, which reduces user trust.
- Convenience-Data Protection Tension: GenAI simplifies tasks but creates significant concerns about the privacy and security of sensitive information.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a topic that’s on every leader’s mind: Generative AI in the workplace. We're looking at a fascinating new study titled "Navigating Generative AI Usage Tensions in Knowledge Work: A Socio-Technical Perspective".
Host: It explores the complex challenges and advantages of integrating tools like ChatGPT into our daily work, identifying key points of conflict and proposing solutions.
Host: And to help us unpack it all, we have our expert analyst, Alex Ian Sutherland. Alex, welcome to the show.
Expert: Thanks for having me, Anna. It’s a timely topic.
Host: It certainly is. So, let's start with the big picture. What is the core problem this study addresses for businesses?
Expert: The core problem is that companies are rushing to adopt Generative AI for its incredible productivity benefits, but they’re hitting roadblocks. They're facing these powerful, conflicting forces—or 'tensions,' as the study calls them—between the need for speed, the demand for reliability, and the absolute necessity of data privacy.
Host: Can you give us a real-world example of what that tension looks like?
Expert: The study opens with a perfect one. Imagine a manager under pressure to hire someone. They upload all the applicant resumes to ChatGPT and ask it to pick the best candidate. It’s incredibly fast, but they've just ignored company policy and likely violated data privacy laws by uploading sensitive personal data to a public tool. That’s the conflict right there: efficiency versus ethics and security.
Host: That’s a very clear, and slightly scary, example. So how did the researchers get to the heart of these issues? What was their approach?
Expert: They used a really solid two-part method. First, they did a deep dive into all the existing academic literature on the topic. Then, to ground the theory in reality, they conducted in-depth interviews with 18 knowledge workers—people who are using these AI tools every single day in demanding professional fields.
Host: So they combined the academic view with on-the-ground experience. What were some of the key tensions they uncovered from those interviews?
Expert: There were five major ones, but a few really stand out for business. The first is what they call the "Productivity-Reflection Tension."
Host: That sounds like a classic speed versus quality trade-off.
Expert: Exactly. GenAI makes us incredibly efficient. One interviewee noted their use of programmer forums like Stack Overflow dropped by 99% because they could get code faster from an AI. But the major risk is what the study calls 'blind reliance.' We stop thinking critically about the output.
Host: We just trust the machine?
Expert: Precisely. Another interviewee said, "You’re tempted to simply believe what it says and it’s quite a challenge to really question whether it’s true." This can lead to a decline in critical thinking skills across the team, which is a huge long-term risk.
Host: That's a serious concern. You also mentioned reliability. I imagine that connects to the "Efficiency-Traceability Dilemma"?
Expert: It does. This is about the black box nature of AI. It gives you an answer, but can you prove where it came from? In professional work, you need verifiable sources. The study found users were incredibly frustrated when the AI would just invent sources or create what they called 'fantasy publications'. For any serious research or reporting, this makes the tool unreliable.
Host: And I’m sure that leads us to the tension that keeps CFOs and CTOs up at night: the clash between convenience and data protection.
Expert: This is the big one. It's just so easy for an employee to paste a sensitive client email or a draft of a confidential financial report into a public AI to get it proofread or summarized. One person interviewed voiced a huge concern, saying, "I can imagine that many trade secrets simply go to the AI when people have emails rewritten via GPT."
Host: So, Alex, this all seems quite daunting for leaders. Based on the study's findings, what are the practical, actionable takeaways for businesses? How do we navigate this?
Expert: The study offers very clear solutions, and it’s not about banning the technology. First, organizations need to establish clear AI governance policies. This means defining what tools are approved and, crucially, what types of data can and cannot be entered into them.
Host: So, creating a clear rulebook. What else?
Expert: Second, implement what the researchers call 'human-in-the-loop' models. AI should be treated as an assistant that produces a first draft, but a human expert must always be responsible for validating, editing, and finalizing the work. This directly counters that risk of blind reliance we talked about.
Host: That makes a lot of sense. Human oversight is key.
Expert: And finally, invest in critical AI literacy training. Don't just show your employees how to use the tools, teach them how to question the tools. Train them to spot potential biases, to fact-check the outputs, and to understand the fundamental limitations of the technology.
Host: So, to sum it up: Generative AI is a powerful engine for productivity, but it comes with these built-in tensions around critical thinking, traceability, and data security. The path forward isn't to stop the car, but to steer it with clear governance, mandatory human oversight, and smarter, better-trained drivers.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping our world.
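The governance advice in this episode, defining what data may and may not be entered into public AI tools, can be partially enforced as policy-in-code. Below is a minimal illustrative sketch of a pre-send screening filter in Python. It is not from the study; the simplified regex patterns and the `screen_before_send` helper are assumptions invented for illustration.

```python
import re

# Illustrative patterns a governance policy might block before text is sent
# to a public AI tool. These regexes are simplified examples, not a complete
# PII or trade-secret detector.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "keyword 'confidential'": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def screen_before_send(text: str) -> list[str]:
    """Return the list of policy violations found in text destined for a public AI tool."""
    return [label for label, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]

violations = screen_before_send(
    "Please rewrite: confidential offer for jane.doe@example.com"
)
# A non-empty list means the text should be held back for human review.
```

A real deployment would rely on a proper data-loss-prevention service rather than hand-written regexes; the sketch only shows how "what can and cannot be entered" becomes an enforceable check instead of a slide in a training deck.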
Generative AI, Knowledge work, Tensions, Socio-technical systems theory
International Conference on Wirtschaftsinformatik (2025)
Thinking Twice: A Sequential Approach to Nudge Towards Reflective Judgment in GenAI-Assisted Decision Making
Hüseyin Hussein Keke, Daniel Eisenhardt, Christian Meske
This study investigates how to encourage more thoughtful and analytical decision-making when people use Generative AI (GenAI). Through an experiment with 130 participants, researchers tested an interaction design where users first made their own decision on a problem-solving task before receiving AI assistance. This sequential approach was compared to conditions where users received AI help concurrently or not at all.
Problem
When using GenAI tools for decision support, humans have a natural tendency to rely on quick, intuitive judgments rather than engaging in deep, analytical thought. This can lead to suboptimal decisions and increases the risks associated with relying on AI, as users may not critically evaluate the AI's output. The study addresses the challenge of designing human-AI interactions that promote a shift towards more reflective thinking.
Outcome
- Requiring users to make an initial decision before receiving GenAI help (a sequential approach) significantly improved their final decision-making performance.
- This sequential interaction method was more effective than providing AI assistance at the same time as the task (concurrently) or providing no AI assistance at all.
- Users who made an initial decision first were more likely to use the available AI prompts, suggesting a more deliberate engagement with the technology.
- The findings suggest that this sequential design acts as a 'cognitive nudge,' successfully shifting users from fast, intuitive thinking to slower, more reflective analysis.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into how we can make smarter decisions when using tools like ChatGPT. We’re looking at a fascinating new study titled "Thinking Twice: A Sequential Approach to Nudge Towards Reflective Judgment in GenAI-Assisted Decision Making."
Host: In short, it investigates how to encourage more thoughtful, analytical decision-making when we get help from Generative AI. And to help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. We all use these new AI tools, and they feel like a massive shortcut. What's the problem this study is trying to solve?
Expert: The problem is that we're a bit too quick to trust those shortcuts. The study is based on a concept called Dual Process Theory, which says we have two modes of thinking. There’s ‘System 1’, which is fast, intuitive, and gut-reaction. And there’s ‘System 2’, which is slow, analytical, and deliberate.
Host: So, like deciding what to have for lunch versus solving a complex math problem.
Expert: Exactly. And when we use Generative AI, we tend to stay in that fast, System 1 mode. We ask a question, get an answer, and accept it without much critical thought. This can lead to suboptimal decisions because we're not truly engaging our analytical brain or questioning the AI's output.
Host: That makes sense. We offload the thinking. So how did the researchers in this study try to get people to slow down and actually think?
Expert: They ran a clever experiment with 130 participants. They gave them tricky brain teasers—problems that are designed to fool your intuition, like the famous Monty Hall problem.
Host: Ah, the one with the three doors and the car! I always get that wrong.
Expert: Most people do, initially. The participants were split into three groups. One group got no AI help. A second group got AI assistance concurrently, meaning they could ask ChatGPT for help right away.
Host: And the third group?
Expert: This was the key. The third group used a 'sequential' approach. They had to submit their own answer to the brain teaser *first*, before they were allowed to see what the AI had to say. Only then could they review the AI's logic and submit a final answer.
Host: So they were forced to think for themselves before leaning on the technology. Did this 'think first' approach actually work? What were the key findings?
Expert: It worked remarkably well. The group that had to make an initial decision first—the sequential group—had the best performance by a wide margin. Their final decisions were correct about 67% of the time.
Host: And how does that compare to the others?
Expert: It’s a huge difference. The group with immediate AI help was right only 49% of the time, and the group with no AI at all was correct just 33% of the time. So, thinking first, then consulting the AI, was significantly more effective than either going it alone or using the AI as an immediate crutch.
Host: That’s a powerful result. Was there anything else that stood out?
Expert: Yes. The 'think first' group also engaged more deeply with the AI. They used more than double the number of AI prompts compared to the group that had concurrent access. It suggests that by forming their own opinion first, they became more curious and critical, using the AI to test their own logic rather than just get a quick answer.
Host: This is fascinating, but let's translate it for our audience. Why does this matter for a business leader or a manager?
Expert: This is the most crucial part. It has direct implications for how we should design business workflows that involve AI. It tells us that the user interface and the process matter immensely.
Host: So it's not just about having the tool, but *how* you use it.
Expert: Precisely. For any high-stakes decision—like financial forecasting, market strategy, or even reviewing legal documents—businesses should build in a moment of structured reflection. Instead of letting a team just ask an AI for a strategy, the workflow should require the team to develop their own initial proposal first.
Host: You’re describing a kind of "speed bump" for the brain.
Expert: It's exactly that. A cognitive nudge. This sequential process forces employees to form an opinion, which makes them more likely to spot discrepancies or weaknesses in the AI’s suggestion. It transforms the AI from a simple answer machine into a true collaborator—a sparring partner that sharpens your own thinking.
Host: So this could be a practical way to avoid groupthink and prevent that blind over-reliance on technology we hear so much about.
Expert: Yes. It builds a more resilient and critically-minded workforce. By making people think twice, you get better decisions and you train your employees to be more effective partners with AI, not just passive consumers of it.
Host: A powerful insight. Let's summarize for our listeners. We often use GenAI with our fast, intuitive brain, which can lead to errors.
Host: But this study shows that a simple process change—requiring a person to make their own decision *before* getting AI help—dramatically improves performance.
Host: For businesses, this means designing workflows that encourage reflection first, turning AI into a tool that challenges and refines our thinking, rather than replacing it.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping our world.
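The 'think first' interaction discussed in this episode can be sketched as a small workflow that withholds AI assistance until the user commits to an answer. This is an illustrative sketch only; the `ask_user` and `ask_ai` callables are hypothetical stand-ins for a real interface and model, not the study's actual implementation.

```python
def sequential_decision(task: str, ask_user, ask_ai) -> dict:
    """Sequential nudge: collect the user's own answer first,
    only then reveal the AI suggestion and allow a revision."""
    initial = ask_user(task)       # step 1: user commits before seeing any AI output
    suggestion = ask_ai(task)      # step 2: AI assistance is revealed afterwards
    final = ask_user(              # step 3: user may revise after reflecting on both
        f"{task}\nYour initial answer: {initial}\n"
        f"AI suggests: {suggestion}\nFinal answer?"
    )
    return {"initial": initial, "suggestion": suggestion, "final": final}

# Example with stubbed callables standing in for a real UI and model:
result = sequential_decision(
    "Monty Hall: stick or switch?",
    ask_user=lambda prompt: "switch" if "AI suggests" in prompt else "stick",
    ask_ai=lambda task: "switch (2/3 win probability)",
)
```

The design choice is the ordering itself: because `ask_ai` runs only after the first `ask_user` call returns, the interface structurally prevents the fast, System 1 shortcut of copying the AI's answer without forming an opinion.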
Dual Process Theory, Digital Nudging, Cognitive Forcing, Generative AI, Decision Making
International Conference on Wirtschaftsinformatik (2025)
Adopting Generative AI in Industrial Product Companies: Challenges and Early Pathways
Vincent Paffrath, Manuel Wlcek, and Felix Wortmann
This study investigates the adoption of Generative AI (GenAI) within industrial product companies by identifying key challenges and potential solutions. Based on expert interviews with industry leaders and technology providers, the research categorizes findings into technological, organizational, and environmental dimensions to bridge the gap between expectation and practical implementation.
Problem
While GenAI is transforming many industries, its adoption by industrial product companies is particularly difficult. Unlike software firms, these companies often lack deep digital expertise, are burdened by legacy systems, and must integrate new technologies into complex hardware and service environments, making it hard to realize GenAI's full potential.
Outcome
- Technological challenges like AI model 'hallucinations' and inconsistent results are best managed through enterprise grounding (using company data to improve accuracy) and standardized testing procedures.
- Organizational hurdles include the difficulty of calculating ROI and managing unrealistic expectations. The study suggests focusing on simple, non-financial KPIs (like user adoption and time saved) and providing realistic employee training to demystify the technology.
- Environmental risks such as vendor lock-in and complex new regulations can be mitigated by creating model-agnostic systems that allow switching between providers and establishing standardized compliance frameworks for all AI use cases.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into the world of manufacturing and heavy industry, a sector that's grappling with one of the biggest technological shifts of our time: Generative AI.
Host: We're exploring a new study titled, "Adopting Generative AI in Industrial Product Companies: Challenges and Early Pathways."
Host: In short, it investigates how companies that make physical products are navigating the hype and hurdles of GenAI, based on interviews with leaders on the front lines.
Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome back.
Expert: Great to be here, Anna.
Host: So, Alex, we hear about GenAI transforming everything from marketing to software development. Why is it a particularly tough challenge for industrial companies? What's the big problem here?
Expert: It’s a great question. Unlike a software firm, an industrial product company can't just plug in a chatbot and call it a day. The study points out that these companies operate in a complex world of hardware, legacy systems, and strict regulations.
Expert: Think about a car manufacturer or an energy provider. An AI error isn't just a typo; it could be a safety risk or a massive product failure. They're trying to integrate this brand-new, fast-moving technology into an environment that is, by necessity, cautious and methodical.
Host: That makes sense. The stakes are much higher when physical products and safety are involved. So how did the researchers get to the bottom of these specific challenges?
Expert: They went straight to the source. The study is built on 22 in-depth interviews with executives and managers from leading industrial companies—think advanced manufacturing, automotive, and robotics—as well as the tech providers who supply the AI.
Expert: This dual perspective allowed them to see both sides of the coin: the challenges the industrial firms face, and the solutions the tech experts are building. They then structured these findings across three key areas: technology, organization, and the external environment.
Host: A very thorough approach. Let’s get into those findings. Starting with the technology itself, we all hear about AI models 'hallucinating' or making things up. How do industrial firms handle that risk?
Expert: This was a major focus. The study found that the most effective countermeasure is something called 'Enterprise Grounding.' Instead of letting the AI pull answers from the vast, unreliable internet, companies are grounding it in their own internal data—engineering specs, maintenance logs, quality reports.
Expert: One technique mentioned is Retrieval-Augmented Generation, or RAG. It essentially forces the AI to check its facts against a trusted company knowledge base before it gives an answer, dramatically improving accuracy and reducing those dangerous hallucinations.
Host: So it's about giving the AI a very specific, high-quality library to read from. What about the challenges inside the company—the people and the processes?
Expert: This is where it gets really interesting. The biggest organizational hurdle wasn't the tech, but the finances and the expectations. It's incredibly difficult to calculate a clear Return on Investment, or ROI, for GenAI.
Expert: To solve this, the study found leading companies are ditching complex financial models. Instead, they’re using a 'Minimum Viable KPI Set'—just two simple metrics for every project: First, Adoption, which asks 'Are people actually using it?' and second, Performance, which asks 'Is it saving time or resources?'
Host: That sounds much more practical. And what about managing expectations? The hype is enormous.
Expert: Exactly. The study calls this the 'Hopium' effect. High initial hopes lead to disappointment and then users abandon the tool. One firm reported that 80% of its initial GenAI licenses went unused for this very reason.
Expert: The solution is straightforward but crucial: demystify the technology. Companies are creating realistic employee training programs that show not only what GenAI can do, but also what it *can't* do. It fosters a culture of smart experimentation rather than blind optimism.
Host: That’s a powerful lesson. Finally, what about the external environment? Things like competitors, partners, and new laws.
Expert: The two big risks here are vendor lock-in and regulation. Companies are worried about becoming totally dependent on a single AI provider.
Expert: The key strategy to mitigate this is building a 'model-agnostic architecture'. It means designing your systems so you can easily swap one AI model for another from a different provider, depending on cost, performance, or new capabilities. It keeps you flexible and in control.
Host: This is all incredibly insightful. Alex, if you had to boil this down for a business leader listening right now, what are the top takeaways from this study?
Expert: I'd say there are three critical takeaways. First, ground your AI. Don't let it run wild. Anchor it in your own trusted, high-quality company data to ensure it's reliable and accurate for your specific needs.
Expert: Second, measure what matters. Forget perfect ROI for now. Focus on simple metrics like user adoption and time saved to prove value and build momentum for your AI initiatives.
Expert: And third, stay agile. The AI world is changing by the quarter, not the year. A model-agnostic architecture is your best defense against getting locked into one vendor and ensures you can always use the best tool for the job.
Host: Ground your AI, measure what matters, and stay agile. Fantastic advice. That brings us to the end of our time. Alex, thank you so much for breaking down this complex topic for us.
Expert: My pleasure, Anna.
Host: And to our audience, thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. We'll see you next time.
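The enterprise-grounding technique named in this episode, Retrieval-Augmented Generation (RAG), can be sketched in a few lines: retrieve the most relevant internal document, then build a prompt that instructs the model to answer only from that trusted context. This sketch uses a toy word-overlap retriever and an invented two-entry knowledge base; production systems use embedding-based search, and the final model call is deliberately left as a stub.

```python
def retrieve(query: str, knowledge_base: dict[str, str]) -> str:
    """Toy retriever: return the document sharing the most words with the query.
    Real RAG pipelines use vector-embedding similarity instead of word overlap."""
    query_words = set(query.lower().split())
    return max(knowledge_base.values(),
               key=lambda doc: len(query_words & set(doc.lower().split())))

def grounded_prompt(query: str, knowledge_base: dict[str, str]) -> str:
    """Build a prompt that restricts the model to the retrieved company context."""
    context = retrieve(query, knowledge_base)
    return ("Answer using ONLY the context below. If the context is "
            f"insufficient, say so.\n\nContext: {context}\n\nQuestion: {query}")

# Invented internal documents standing in for real engineering records:
kb = {
    "maintenance": "Pump P-101 requires seal inspection every 500 operating hours.",
    "quality": "Weld seams on frame type F3 are ultrasonically tested at station 7.",
}
prompt = grounded_prompt("How often does pump P-101 need a seal inspection?", kb)
# A call_model(prompt) step would follow; the grounding lives in the prompt itself.
```

The point of the sketch is the shape of the pipeline, not the retriever: the model is handed a vetted company record and an explicit instruction to refuse when the record does not cover the question, which is what cuts down hallucinated answers.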
GenAI, AI Adoption, Industrial Product Companies, AI in Manufacturing, Digital Transformation
International Conference on Wirtschaftsinformatik (2025)
AI-Powered Teams: How the Usage of Generative AI Tools Enhances Knowledge Transfer and Knowledge Application in Knowledge-Intensive Teams
Olivia Bruhin, Luc Bumann, Philipp Ebel
This study investigates the role of Generative AI (GenAI) tools, such as ChatGPT and GitHub Copilot, in software development teams. Through an empirical study with 80 software developers, the research examines how GenAI usage influences key knowledge management processes—knowledge transfer and application—and the subsequent effect on team performance.
Problem
While the individual productivity gains from GenAI tools are increasingly recognized, their broader impact on team-level knowledge management and performance remains poorly understood. This gap poses a risk for businesses, as adopting these technologies without understanding their collaborative effects could lead to unintended consequences like reduced knowledge retention or impaired team dynamics.
Outcome
- The use of Generative AI (GenAI) tools significantly enhances both knowledge transfer (sharing) and knowledge application within software development teams.
- GenAI usage has a direct positive impact on overall team performance.
- The performance improvement is primarily driven by the team's improved ability to apply knowledge, rather than just the transfer of knowledge alone.
- The findings highlight GenAI's role as a catalyst for innovation, but stress that knowledge gained via AI must be actively and contextually applied to boost team performance effectively.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a fascinating new study titled "AI-Powered Teams: How the Usage of Generative AI Tools Enhances Knowledge Transfer and Knowledge Application in Knowledge-Intensive Teams".
Host: It explores how tools we're all hearing about, like ChatGPT and GitHub Copilot, are changing the game for software development teams. Specifically, it looks at how these tools affect the way teams share and use knowledge to get work done. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna.
Host: Alex, we all know GenAI tools can make individuals more productive. But this study looks at the bigger picture, right? The team level. What’s the core problem they're trying to solve here?
Expert: Exactly. While we see headlines about individual productivity skyrocketing, there's a big question mark over what happens when you put these tools into a collaborative team environment. The concern is that businesses are adopting this tech without fully understanding the team-level impacts.
Host: What kind of impacts are we talking about?
Expert: Well, the study points to some serious potential risks. Things like the erosion of unique human expertise, reduced knowledge retention within the team, or even impaired decision-making. Just because an individual can write code faster doesn't automatically mean the team as a whole becomes more innovative or performs better. There was a real gap in our understanding of that connection.
Host: So, how did the researchers investigate this? What was their approach?
Expert: They conducted an empirical study with 80 software developers who are active, regular users of Generative AI in their jobs. They used a structured survey to measure how the use of these tools influenced two key areas: first, "knowledge transfer," which is basically sharing information and expertise, and second, "knowledge application," which is the team's ability to actually use that knowledge to solve new problems. Then they linked those factors to overall team performance.
Host: A direct look at the people on the front lines. So, what were the key findings? What did the data reveal?
Expert: The results were quite clear on a few things. First, using GenAI tools significantly boosts both knowledge transfer and knowledge application. Teams found it easier to share information and easier to put that information to work.
Host: Okay, so it helps on both fronts. Did one matter more than the other when it came to the team’s actual success?
Expert: That's the most interesting part. Yes, one mattered much more. The study found that the biggest driver of improved team performance was knowledge *application*. Just sharing information more efficiently wasn't the magic bullet. The real value came when teams used the AI to help them apply knowledge and actively solve problems.
Host: So it’s not about having the answers, it's about using them. That makes sense. Let's get to the bottom line, Alex. What does this mean for business leaders, for the managers listening to our show?
Expert: This is the crucial takeaway. It's not enough to just give your teams a subscription to an AI tool and expect results. The focus needs to be on integration. Leaders should be asking: How can we create an environment where these tools help our teams *apply* knowledge? This means fostering a culture of active problem-solving and experimentation, using AI as a collaborator.
Host: So, it’s a tool to be wielded, not a replacement for team thinking.
Expert: Precisely. The study emphasizes that GenAI should complement human expertise, not replace it. Over-reliance can be dangerous and may reduce the interpersonal learning that’s so critical for innovation. The goal is balanced usage, where AI handles routine tasks, freeing up humans to focus on complex, collaborative problem-solving. Think of GenAI as a catalyst, but your team is still the engine.
Host: That’s a powerful distinction. So, to recap: this research shows that GenAI can be a fantastic asset for teams, boosting performance by helping them not just share information, but more importantly, *apply* it effectively. The key, however, is thoughtful integration—using AI to augment human collaboration, not automate it away.
Host: Alex, thank you for breaking that down for us with such clarity.
Expert: My pleasure, Anna.
Host: And thank you for tuning into A.I.S. Insights, powered by Living Knowledge.
Human-AI Collaboration, AI in Knowledge Work, Collaboration, Generative AI, Software Development, Team Performance, Knowledge Management
International Conference on Wirtschaftsinformatik (2025)
Configurations of Digital Choice Environments: Shaping Awareness of the Impact of Context on Choices
Phillip Oliver Gottschewski-Meyer, Fabian Lang, Paul-Ferdinand Steuck, Marco DiMaria, Thorsten Schoormann, and Ralf Knackstedt
This study investigates how the layout and components of digital environments, like e-commerce websites, influence consumer choices. Through an online experiment in a fictional store with 421 participants, researchers tested how the presence and placement of website elements, such as a chatbot, interact with marketing nudges like 'bestseller' tags.
Problem
Businesses often use 'nudges' like bestseller tags to steer customer choices, but little is known about how the overall website design affects the success of these nudges. It's unclear if other website components, such as chatbots, can interfere with or enhance these marketing interventions, leading to unpredictable consumer behavior and potentially ineffective strategies.
Outcome
- The mere presence of a website component, like a chatbot, significantly alters user product choices. In the study, adding a chatbot doubled the odds of participants selecting a specific product.
- The position of a component matters. Placing a chatbot on the right side of the screen led to different product choices compared to placing it on the left.
- The chatbot's presence did not weaken the effect of a 'bestseller' nudge. Instead, the layout component (chatbot) and the nudge (bestseller tag) influenced user choice independently of each other.
- Website design directly influences user decisions. Even simple factors like the presence and placement of elements can bias user selections, separate from intentional marketing interventions.
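The "doubled the odds" result above is an odds ratio, the usual effect measure when choice data are analyzed with logistic models. As a quick illustration with invented counts (the study's raw numbers are not reproduced here), an odds ratio of 2 works out like this:

```python
def odds_ratio(chose_a: int, other_a: int, chose_b: int, other_b: int) -> float:
    """Odds ratio: odds of the choice in condition A divided by the odds
    in condition B, where odds = (count choosing) / (count not choosing)."""
    return (chose_a / other_a) / (chose_b / other_b)

# Hypothetical counts for illustration only, not the study's data:
# with chatbot, 40 of 100 chose the product; without it, 25 of 100 did.
ratio = odds_ratio(40, 60, 25, 75)  # (40/60) / (25/75) = 2.0
```

Note that "doubled the odds" is weaker than "doubled the probability": in this invented example the share choosing the product rises from 25% to 40%, not to 50%, even though the odds exactly double.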
Host: Welcome to A.I.S. Insights, the podcast where we connect academic research with real-world business strategy, all powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating study titled "Configurations of Digital Choice Environments: Shaping Awareness of the Impact of Context on Choices".
Host: In short, it’s all about how the layout of your website—things you might not even think about—can dramatically influence what your customers buy. With me to unpack this is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Businesses spend a lot of time and money on things like 'bestseller' tags or 'limited stock' warnings to nudge customers. What's the problem this study set out to solve?
Expert: The problem is that businesses often treat those nudges as if they exist in a vacuum. They add a 'bestseller' tag and expect a certain result. But they don't account for the rest of the webpage.
Expert: The researchers wanted to know how other common website elements, like a simple chatbot window, might interfere with or even change the effectiveness of those marketing nudges. It’s a huge blind spot for companies, leading to unpredictable results.
Host: So they’re looking at the entire digital environment, not just one element. How did they test this?
Expert: They ran a clever online experiment with over 400 participants in a fictional e-commerce store that sold headphones.
Expert: They created six different versions of the product page. Some had no chatbot, some had a chatbot on the left, and others had it on the right. They also tested these layouts with and without a 'bestseller' tag on one of the products.
Expert: This allowed them to precisely measure how the presence and the position of the chatbot influenced which pair of headphones people chose, both with and without the marketing nudge.
Host: A very controlled setup. So, what did they find? Were there any surprises?
Expert: Absolutely. The findings were quite striking. First, just having a chatbot on the page significantly altered user choices.
Expert: In fact, the data showed that the mere presence of the chatbot doubled the odds of participants selecting one particular product over others.
Host: Wow, doubled the odds? Just by being there? What about its location?
Expert: That mattered, too. Placing the chatbot on the right side of the screen led to a different pattern of product choices compared to placing it on the left.
Expert: For example, a right-sided chatbot made users more likely to choose the bottom-left product, while a left-sided chatbot drew attention to the top-center product. The layout itself was directing user behavior.
Host: So the chatbot had its own powerful effect. But did it interfere with the 'bestseller' tag they were also testing?
Expert: That's the most interesting part. It didn't. The chatbot's presence didn't weaken the effect of the bestseller nudge.
Expert: The two things—the layout component and the marketing nudge—influenced the customer's choice independently. It’s not one or the other; they both work on the user at the same time, but separately.
Host: This feels incredibly important for anyone running an online business. Let's get to the bottom line: why does this matter? What should a business leader or a web designer take away from this?
Expert: The number one takeaway is that you have to think about your website holistically. When you add a new feature, you're not just adding a button or a window; you're reconfiguring the entire customer choice environment.
Host: So every single element plays a role in the final decision.
Expert: Exactly. And that leads to the second key takeaway: test everything. This study proves that a simple change, like moving a component from left to right, can have a measurable impact on sales and user behavior. These aren't just design choices; they are strategic business decisions.
Host: It sounds like businesses might be influencing customers in ways they don't even realize.
Expert: That's the final point. Your website design is already nudging users, whether you intend it to or not. A chatbot isn't just a support tool; it's a powerful visual cue that biases user selection. Businesses need to be aware of these subtle, built-in influences and manage them intentionally.
Host: A powerful reminder that in the digital world, nothing is truly neutral. Let's recap.
Host: The layout of your website is actively shaping customer choices. Seemingly functional elements like chatbots have their own significant impact, and their placement matters immensely. These elements act independently of your marketing nudges, meaning you have multiple tools influencing behavior at once.
Host: The core lesson is to view your website as a complete, interconnected system and to be deliberate and test every single change.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And to our listeners, thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more research that’s shaping the future of business.
Digital choice environments, digital interventions, configuration, nudging, e-commerce, user interface design, consumer behavior
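The "doubled the odds" finding discussed above is an odds ratio, a standard effect-size measure in choice experiments. The sketch below shows how such a ratio is computed; the counts are hypothetical for illustration, not the study's data.

```python
# Illustrative only: hypothetical counts, not the study's data.
# Odds ratio for choosing a focal product with vs. without a chatbot present.

def odds_ratio(chose_a, other_a, chose_b, other_b):
    """Odds of choosing the focal product in condition A relative to condition B."""
    odds_a = chose_a / other_a  # odds with chatbot
    odds_b = chose_b / other_b  # odds without chatbot
    return odds_a / odds_b

# Hypothetical 2x2 table: 40 of 100 chose the product with a chatbot,
# 25 of 100 chose it without one.
ratio = odds_ratio(40, 60, 25, 75)
print(round(ratio, 2))  # prints 2.0 -- "doubled the odds"
```

An odds ratio of 1.0 would mean the chatbot had no effect; a value of 2.0 corresponds to the doubling reported in the study.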
International Conference on Wirtschaftsinformatik (2025)
Digital Detox: Understanding Knowledge Workers' Motivators and Requirements for Technostress Relief
Marie Langer, Milad Mirbabaie, Chiara Renna
This study investigates how knowledge workers use "digital detox" to manage technology-related stress, known as technostress. Through 16 semi-structured interviews, the research explores the motivations for and requirements of practicing digital detox in a professional environment, understanding it as a coping behavior that enables psychological detachment from work.
Problem
In the modern digital workplace, constant connectivity through information and communication technologies (ICT) frequently causes technostress, which negatively affects employee well-being and productivity. While the concept of digital detox is becoming more popular, there is a significant research gap regarding why knowledge workers adopt it and what individual or organizational support they need to do so effectively.
Outcome
- The primary motivators for knowledge workers to engage in digital detox are the desires to improve work performance by minimizing distractions and to enhance personal well-being by mentally disconnecting from work.
- Key drivers of technostress that a digital detox addresses are 'techno-overload' (the increased pace and volume of work) and 'techno-invasion' (the blurring of boundaries between work and private life).
- Effective implementation of digital detox requires both individual responsibility (e.g., self-control, transparent communication about availability) and organizational support (e.g., creating clear policies, fostering a supportive culture).
- Digital detox serves as both a reactive and proactive coping strategy for technostress, but its success is highly dependent on supportive social norms and organizational adjustments.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we’re tackling a feeling many of us know all too well: the digital drain. We'll be looking at a study titled "Digital Detox: Understanding Knowledge Workers' Motivators and Requirements for Technostress Relief." Host: It investigates how professionals use digital detox to manage technology-related stress, exploring why they do it and what support they need to succeed. Here to unpack it all is our analyst, Alex Ian Sutherland. Welcome, Alex. Expert: Great to be here, Anna. Host: Alex, let's start with the big picture. We all feel that pressure from constant emails and notifications. But this study frames it as a serious business problem, doesn't it? Expert: Absolutely. The term the research uses is "technostress." It's the negative impact on our well-being and productivity caused by constant connectivity. The study points out that this isn't just an annoyance; it leads to concrete problems like cognitive overload, exhaustion, burnout, and ultimately, poor performance and higher employee turnover. Host: So it directly hits both the employee's well-being and the company's bottom line. How did the researchers investigate this? Expert: They went straight to the source. The study was based on in-depth, semi-structured interviews with 16 knowledge workers who had direct experience trying to implement a digital detox. This qualitative method allowed them to really understand the personal motivations and challenges involved. Host: And what did those interviews reveal? What were the key findings? Expert: The study found two primary motivators for employees. The first is a desire to improve work performance. People are actively trying to minimize distractions to do better, more focused work. One interviewee mentioned that a simple pop-up message could derail a task that should take 10 minutes and turn it into an hour-long distraction. 
Host: That’s incredibly relatable. Better focus means better work. What was the second motivator? Expert: The second driver was enhancing personal well-being. This is all about the need to psychologically detach and mentally switch off from work. The study specifically identifies two key stressors that a detox helps with. The first is 'techno-overload' – the sheer volume and pace of digital work. Host: The feeling of being buried in information. Expert: Exactly. And the second is 'techno-invasion,' which is that blurring of boundaries where work constantly spills into our private lives, often through our smartphones. Host: So, it's about reclaiming both focus at work and personal time after work. But the study suggests employees can’t really do this on their own, right? Expert: That's one of the most important findings. Effective digital detox requires a partnership. It needs individual responsibility, like self-control and being transparent about your availability, but the research is clear that these efforts can fail without strong organizational support. Host: This brings us to the most crucial part for our listeners. What are the practical takeaways for business leaders? How can organizations provide that support? Expert: The study emphasizes that leaders can't treat this as just an employee's personal problem. They must actively create a supportive culture. This can mean establishing clear policies on after-hours communication, introducing "meeting-free" days to allow for deep work, or encouraging teams to openly discuss and agree on their communication norms. Host: So company culture is the key. Expert: It's fundamental. The research points out that if a manager is sending emails at 10 PM, it creates an implicit expectation of availability that undermines any individual's attempt to detox. The social norms within a team are incredibly powerful. It’s not about banning technology, but managing it with clear rules and expectations. 
Host: It sounds like it's about making technology work for the company, not the other way around. Expert: Precisely. The goal isn't to escape technology, but to use digital detox as a proactive strategy. When done right, it boosts both productivity and employee well-being, which are two sides of the same coin for any successful business. Host: So, to summarize: Technostress is a real threat to both performance and people. A digital detox is a powerful coping strategy, but it requires a partnership between motivated employees and a supportive organization that sets clear boundaries and fosters a healthy digital culture. Host: Alex Ian Sutherland, thank you for making this complex topic so clear. Expert: My pleasure, Anna. Host: And thank you for tuning in to A.I.S. Insights, powered by Living Knowledge.
Digital Detox, Technostress, Knowledge Worker, ICT, Psychological Detachment, Work-Life Balance
International Conference on Wirtschaftsinformatik (2025)
Revisiting the Responsibility Gap in Human-AI Collaboration from an Affective Agency Perspective
Jonas Rieskamp, Annika Küster, Bünyamin Kalyoncuoglu, Paulina Frieda Saffer, and Milad Mirbabaie
This study investigates how responsibility is understood and assigned when artificial intelligence (AI) systems influence decision-making processes. Using qualitative interviews with experts across various sectors, the research explores how human oversight and emotional engagement (affective agency) shape accountability in human-AI collaboration.
Problem
As AI systems become more autonomous in fields from healthcare to finance, a 'responsibility gap' emerges. It becomes difficult to assign accountability for errors or outcomes, as responsibility is diffused among developers, users, and the AI itself, challenging traditional models of liability.
Outcome
- Using AI does not diminish human responsibility; instead, it often intensifies it, requiring users to critically evaluate and validate AI outputs.
- Most professionals view AI as a supportive tool or 'sparring partner' rather than an autonomous decision-maker, maintaining that humans must have the final authority.
- The uncertainty surrounding how AI works encourages users to be more cautious and critical, which helps bridge the responsibility gap rather than leading to blind trust.
- Responsibility remains anchored in human oversight, with users feeling accountable not only for the final decision but also for how the AI was used to reach it.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. In a world where artificial intelligence is becoming a key player in corporate decision-making, who is truly responsible when things go wrong? Today we're diving into a fascinating new study titled "Revisiting the Responsibility Gap in Human-AI Collaboration from an Affective Agency Perspective."
Host: It investigates how responsibility is understood and assigned when AI systems influence our choices, and how human oversight and even our emotional engagement with technology can shape accountability. Here to break it all down for us is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the core issue this study addresses: the 'responsibility gap'. It sounds important, but what does it mean in the real world for businesses?
Expert: It's one of the biggest challenges facing organizations today. As AI becomes more autonomous in fields from finance to healthcare, it gets incredibly difficult to pinpoint who is accountable for a bad outcome. Is it the developer who wrote the code? The manager who used the AI's recommendation? The company that deployed it? Responsibility gets diffused across so many people and systems that it can feel like no one is truly in charge.
Host: A 'many-hands' problem, as the researchers call it. It sounds like a legal and ethical minefield. So, how did the study approach this complex topic?
Expert: They went straight to the source. The researchers conducted in-depth interviews with twenty professionals across various sectors—automotive, healthcare, IT—people who are actively working with AI systems every day. They wanted to understand the real-world experiences and feelings of those on the front lines of this technological shift.
Host: So, based on those real-world conversations, what did they find? I think many assume that AI might reduce our sense of responsibility, letting us off the hook.
Expert: That's the common assumption, but the study found the exact opposite. Far from diminishing responsibility, using AI actually seems to intensify it. Professionals reported a greater awareness of the need to validate and interpret AI outputs. They know they can't just say, "The AI told me to do it." Their personal accountability actually grows.
Host: That's counterintuitive. So if the AI isn't the one in charge, how do these professionals view its role in their work?
Expert: Most see AI as a supportive tool, not an autonomous boss. A recurring image from the interviews was that of a 'sparring partner' or a 'second opinion'. It’s a powerful assistant for analyzing data or generating ideas, but the final authority, the final decision, always rests with the human user.
Host: And what about the 'black box' nature of some AI? The fact that we don't always know how it reaches its conclusions. Does that lead to people trusting it blindly?
Expert: No, and this was another surprising finding. That very uncertainty often encourages users to be more cautious and critical. The study found that because professionals understand the potential for AI errors and don't always see the logic, it spurs them to double-check the results. This critical mindset actually helps to bridge the responsibility gap, rather than widen it.
Host: This is incredibly insightful. So, Alex, let's get to the most important question for our audience. What are the key business takeaways here? What should a leader listening right now do with this information?
Expert: There are three critical takeaways. First, you cannot use AI as a scapegoat. The study makes it clear that responsibility remains anchored in human oversight. Leaders must build a culture where employees are expected and empowered to question, verify, and even override AI suggestions.
Host: Okay, so accountability culture is number one. What’s next?
Expert: Second, define roles with absolute clarity. Your teams need to understand the AI's function. Is it an analyst, an advisor, a co-pilot? The 'sparring partner' model seems to be a very effective framework. Make it clear that while the tool is powerful, the final judgment—and the responsibility that comes with it—belongs to your people.
Host: That makes sense. And the third takeaway?
Expert: Finally, rethink your AI training. It’s not just about teaching people which buttons to press. The real need is to develop critical thinking skills for a hybrid human-AI environment. The study suggests that employees need to be more aware of their own feelings—like over-trust or skepticism—towards the AI and use that awareness to make better judgments.
Host: So, to summarize: AI doesn't erase responsibility, it heightens it. We should treat it as a 'sparring partner', not a boss. And its very opaqueness can be a strength if it encourages a more critical, human-in-the-loop approach.
Expert: Exactly. It's about augmenting human intelligence, not replacing human accountability.
Host: Alex Ian Sutherland, thank you so much for these powerful insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the intersection of business and technology.
International Conference on Wirtschaftsinformatik (2025)
Actor-Value Constellations in Circular Ecosystems
Linda Sagnier Eckert, Marcel Fassnacht, Daniel Heinz, Sebastian Alamo Alonso and Gerhard Satzger
This study analyzes 48 real-world examples of circular economies to understand how different companies and organizations collaborate to create sustainable value. Using e³-value modeling, the researchers identified common patterns of interaction, creating a framework of eight distinct business constellations. This research provides a practical guide for organizations aiming to transition to a circular economy.
Problem
While the circular economy offers a promising alternative to traditional 'take-make-dispose' models, there is a lack of clear understanding of how the various actors within these systems (like producers, consumers, and recyclers) should interact and exchange value. This ambiguity makes it difficult for businesses to effectively design and implement circular strategies, leading to missed opportunities and inefficiencies.
Outcome
- The study identified eight recurring patterns, or 'constellations,' of collaboration in circular ecosystems, providing clear models for how businesses can work together.
- These constellations are grouped into three main dimensions: 1) innovation driven by producers, services, or regulations; 2) optimizing resource efficiency through sharing or redistribution; and 3) recovering and processing end-of-life products and materials.
- The research reveals distinct roles that different organizations play (e.g., scavengers, decomposers, producers) and provides strategic blueprints for companies to select partners and define value exchanges to successfully implement circular principles.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we’re diving into the circular economy. It’s a powerful idea, but how do businesses actually make it work? We’re looking at a fascinating study titled "Actor-Value Constellations in Circular Ecosystems." Host: In essence, the researchers analyzed 48 real-world examples of circular economies to map out how different companies collaborate to create sustainable value, providing a practical guide for organizations ready to make the shift. Host: With me is our expert analyst, Alex Ian Sutherland. Alex, welcome. Expert: Great to be here, Anna. Host: Alex, the idea of a circular economy isn't new, but this study suggests businesses are struggling with the execution. What's the big problem they're facing? Expert: Exactly. The core problem is that the circular economy depends on collaboration. It’s not enough for one company to change its ways; it requires an entire ecosystem of partners—producers, consumers, recyclers, service providers—to work together. Expert: But there's a lack of clarity on how these actors should interact and exchange value. This ambiguity leads to inefficiencies, misaligned incentives, and ultimately, missed opportunities. Businesses know they need to collaborate, but they don't have a clear map for how to do it. Host: So they needed a map. How did the researchers go about creating one? What was their approach? Expert: They took a very practical route. They analyzed 48 successful circular businesses, from fashion to food to electronics. For each one, they used a method called e³-value modeling. Expert: Think of it as creating a detailed flowchart for the business ecosystem. It visually maps out who all the actors are, what they do, and how value—whether it's a physical product, data, or money—flows between them. By comparing these maps, they could spot recurring patterns. Host: And what patterns emerged? 
What were the key findings from this analysis? Expert: The most significant finding is that these complex interactions aren't random. They fall into eight distinct patterns, which the study calls 'constellations.' These are essentially proven models for collaboration. Expert: These eight constellations are grouped into three overarching dimensions. The first is 'Circularity-driven Innovation,' which is all about designing out waste from the very beginning. Expert: The second is 'Resource Efficiency Optimization.' This focuses on maximizing the use of products that already exist through things like sharing, renting, or resale platforms. Expert: And the third is 'End-of-Life Product and Material Recovery.' This is what we typically think of as recycling—collecting used products and turning them into valuable new materials. Host: Could you give us a quick example to bring one of those constellations to life? Expert: Certainly. In that third dimension, 'End-of-Life Recovery,' there’s a constellation called 'Scavenger-led EOL recovery.' A great example is a company like Mazuma Mobile. Expert: Mazuma acts as the 'scavenger' by buying old mobile phones from consumers. They then partner with 'decomposers'—refurbishing specialists—to restore the phones. Finally, they redistribute the reconditioned phones for resale. It’s a complete loop orchestrated by a central player. Host: That makes it very clear. So, this brings us to the most important question for our listeners. Why do these eight constellations matter for business leaders? How can they use this? Expert: This is the most practical part. These constellations serve as strategic blueprints. A business leader no longer has to guess how to build a circular model; they can look at these eight patterns and see which one fits their goals. Expert: For instance, if your company wants to launch a rental service, you can look at the 'Intermediated Resource Redistribution' constellation. 
The study shows you the key partners you'll need and how value needs to flow between you, your suppliers, and your customers. Expert: It also highlights the critical role of digital technology. Many of these models, especially those in resource sharing and product take-back, rely on digital platforms for matchmaking, tracking, and data analysis to keep the ecosystem running smoothly. Host: So it’s a framework for both strategy and execution. Alex, thank you for breaking that down for us. Host: To sum up, while the circular economy requires complex collaboration, this study shows it doesn't have to be a mystery. By identifying eight recurring business constellations, it provides a clear roadmap. Host: For business leaders, this research offers practical blueprints to choose the right partners, define winning strategies, and successfully transition to a more sustainable, circular future. Host: A huge thank you to our expert, Alex Ian Sutherland. And thank you for tuning in to A.I.S. Insights.
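The e³-value models discussed in this episode map actors and the value objects they exchange. A minimal sketch of the 'Scavenger-led EOL recovery' constellation from the Mazuma example is shown below; the actor names and exchange list are illustrative, not taken from the study's formal models.

```python
# Illustrative actor-value map for a scavenger-led end-of-life recovery loop.
# Each value exchange is (from_actor, to_actor, value_object).
exchanges = [
    ("Consumer",   "Scavenger",  "used phone"),
    ("Scavenger",  "Consumer",   "payment"),
    ("Scavenger",  "Decomposer", "used phone"),
    ("Decomposer", "Scavenger",  "refurbished phone"),
    ("Scavenger",  "Buyer",      "refurbished phone"),
    ("Buyer",      "Scavenger",  "payment"),
]

def outgoing(actor):
    """List what an actor provides to the rest of the ecosystem."""
    return [(to, obj) for frm, to, obj in exchanges if frm == actor]

# The 'scavenger' orchestrates the loop: it both pays consumers and
# supplies decomposers and buyers.
for to, obj in outgoing("Scavenger"):
    print(f"Scavenger -> {to}: {obj}")
```

Enumerating exchanges this way makes the reciprocity requirement of e³-value modeling visible: every actor that receives something must give something back, which is how the constellation closes the loop.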
International Conference on Wirtschaftsinformatik (2025)
To VR or not to VR? A Taxonomy for Assessing the Suitability of VR in Higher Education
Nadine Bisswang, Georg Herzwurm, Sebastian Richter
This study proposes a taxonomy to help educators in higher education systematically assess whether virtual reality (VR) is suitable for specific learning content. The taxonomy is grounded in established theoretical frameworks and was developed through a multi-stage process involving literature reviews and expert interviews. Its utility is demonstrated through an illustrative scenario where an educator uses the framework to evaluate a specific course module.
Problem
Despite the increasing enthusiasm for using virtual reality (VR) in education, its suitability for specific topics remains unclear. University lecturers, particularly those without prior VR experience, lack a structured approach to decide when and why VR would be an effective teaching tool. This gap leads to uncertainty about its educational benefits and hinders its effective adoption.
Outcome
- Developed a taxonomy that structures the reasons for and against using VR in higher education across five dimensions: learning objective, learning activities, learning assessment, social influence, and hedonic motivation.
- The taxonomy provides a balanced overview by organizing 24 distinct characteristics into factors that favor VR use ('+') and factors that argue against it ('-').
- This framework serves as a practical decision-support tool for lecturers to make an informed initial assessment of VR's suitability for their specific learning content without needing prior technical experience.
- The study demonstrates the taxonomy's utility through an application to a 'warehouse logistics management' learning scenario, showing how it can guide educators' decisions.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we’re diving into the world of virtual reality in education and training, looking at a study titled, "To VR or not to VR? A Taxonomy for Assessing the Suitability of VR in Higher Education". Host: With me is our analyst, Alex Ian Sutherland. Alex, this study seems timely. It proposes a framework to help educators systematically assess if VR is actually the right tool for specific learning content. Expert: That's right, Anna. It’s about moving beyond the hype and making informed decisions. Host: So, let's start with the big problem. We hear constantly that VR is the future, but what's the real-world challenge this study is addressing? Expert: The core problem is uncertainty. An educator, or a corporate trainer for that matter, might be excited by VR's potential, but they lack a clear, structured way to decide if it's genuinely effective for their specific topic. Host: So they’re asking themselves, "Should I invest time and money into creating a VR module for this?" Expert: Exactly. And without a framework, that decision is often based on gut feeling rather than evidence. This can lead to ineffective adoption, where the technology doesn't actually improve learning outcomes, or it gets used for the wrong things. Host: It’s the classic ‘shiny new toy’ syndrome. So how did the researchers create a tool to solve this? What was their approach? Expert: It was a very practical, multi-stage process. They didn't just theorize. They combined established educational frameworks with real-world experience. They conducted sixteen in-depth interviews with experts—university lecturers with years of VR experience and the developers who actually build these applications. Host: So they grounded the theory in practical wisdom. Expert: Precisely. 
This allowed them to build a comprehensive framework that is both academically sound and relevant to the people who would actually use it. Host: And this framework is what the study calls a 'taxonomy'. For our listeners, what does that actually look like? Expert: Think of it as a detailed decision-making checklist. It organizes the reasons for and against using VR across five key dimensions. Host: What are those dimensions? Expert: The first three are directly about the teaching process: the **Learning Objective**—what you want people to learn; the **Learning Activities**—how they will learn it; and the **Learning Assessment**—how you’ll measure if they've learned it. Host: That makes sense. Objective, activity, and assessment. What are the other two? Expert: The other two are about the human and social context. One is **Social Influence**, which considers whether colleagues and the organization support the use of VR. The other is **Hedonic Motivation**, which is really about whether people are personally and professionally motivated to use the technology. Host: And I understand the framework gives a balanced view, right? Expert: Yes, and that’s a key strength. For each of those five areas, the taxonomy lists characteristics that favor using VR—marked with a plus—and those that argue against it—marked with a minus. It gives you a clear, balanced scorecard to inform your decision. Host: This is fascinating. While the study focuses on higher education, the implications for the business world seem enormous, particularly for corporate training. What is the key takeaway for a business leader? Expert: The takeaway is that this framework provides a strategic tool for investing in training technology. You can substitute 'lecturer' for 'corporate L&D manager,' and the challenges are identical. It helps a business move from asking, "Should we use VR?" to the much smarter question, "Where will VR deliver the best return on investment for us?" 
Host: Could you walk us through a business example? Expert: Of course. The study uses the example of teaching 'warehouse logistics management.' For a large retail or logistics company, training new employees on the layout and flow of a massive fulfillment center is a real challenge. It can be costly, disruptive to operations, and even unsafe. Host: So how would the taxonomy help here? Expert: A training manager would see a strong case for VR. The *learning objective* is to understand a complex physical space. The *learning activity* is exploration. VR allows a new hire to do that safely, on-demand, and without setting foot on a busy warehouse floor. It makes training scalable and reduces disruption. Host: And importantly, it also helps identify where *not* to use VR. Expert: Exactly. If your training module is on new compliance regulations or software that's purely text and forms, the taxonomy would quickly show that VR is overkill. You don't need an immersive, 3D world for that. This prevents companies from wasting money on VR for tasks where a simple video or e-learning module is more effective. Host: So, in essence, it’s not about being for or against VR, but about being strategic in its application. This framework gives organizations a clear, evidence-based method to decide where this powerful technology truly fits. Host: A brilliant tool for any business leader exploring immersive learning technologies. Alex Ian Sutherland, thank you for breaking down this study for us. Expert: My pleasure, Anna. Host: And to our audience, thank you for tuning in to A.I.S. Insights — powered by Living Knowledge.
International Conference on Wirtschaftsinformatik (2025)
An Automated Identification of Forward Looking Statements on Financial Metrics in Annual Reports
Khanh Le Nguyen, Diana Hristova
This study presents a three-phase automated Decision Support System (DSS) designed to extract and analyze forward-looking statements on financial metrics from corporate 10-K annual reports. The system uses Natural Language Processing (NLP) to identify relevant text, machine learning models to predict future metric growth, and Generative AI to summarize the findings for users. The goal is to transform unstructured narrative disclosures into actionable, metric-level insights for investors and analysts.
Problem
Manually extracting useful information from lengthy and increasingly complex 10-K reports is a significant challenge for investors seeking to predict a company's future performance. This difficulty creates a need for an automated system that can reliably identify, interpret, and forecast financial metrics based on the narrative sections of these reports, thereby improving the efficiency and accuracy of financial decision-making.
Outcome
- The system extracted forward-looking statements related to financial metrics with 94% accuracy, demonstrating high reliability.
- A Random Forest model outperformed a more complex FinBERT model in predicting future financial growth, indicating that simpler, interpretable models can be more effective for this task.
- AI-generated summaries of the company's outlook achieved a high average rating of 3.69 out of 4 for factual consistency and readability, enhancing transparency for decision-makers.
- The overall system successfully provides an automated pipeline to convert dense corporate text into actionable financial predictions, empowering investors with transparent, data-driven insights.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study titled "An Automated Identification of Forward Looking Statements on Financial Metrics in Annual Reports."
Host: It introduces an A.I. system designed to read complex corporate reports and pull out actionable insights for investors. Here to break it down for us is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, let's start with the big picture. Anyone who's tried to read a corporate 10-K report knows they can be incredibly dense. What's the specific problem this study is trying to solve?
Expert: The core problem is that these reports, which are essential for predicting a company's future, are getting longer and more complex. The study notes that about 80% of a 10-K is narrative text, not just tables of numbers.
Expert: For an investor or analyst, manually digging through hundreds of pages to find clues about future performance is a massive, time-consuming challenge.
Host: And what kind of clues are they looking for in all that text?
Expert: They're searching for what are called "forward-looking statements." These are phrases where management talks about the future, using words like "we anticipate," "we expect," or "we believe." These statements, especially when tied to specific financial metrics like revenue or income, are goldmines of information.
Host: So this study built an automated system to find that gold. How does it work?
Expert: Exactly. It’s a three-phase system. First, it uses Natural Language Processing to scan the 10-K report and automatically extract only those forward-looking sentences that are linked to key financial metrics.
Expert: In the second phase, it takes that text and uses machine learning models to predict the future growth of those metrics. Essentially, it's translating the company's language into a quantitative forecast.
Expert: And finally, in the third phase, it uses Generative AI to create a clear, concise summary of the company's outlook. This makes the findings transparent and easily understandable for the end-user.
Host: It sounds like a complete pipeline from dense text to a clear prediction. What were the key findings when they tested this system?
Expert: The results were very strong. First, the system was able to extract the correct forward-looking statements with 94% accuracy, which shows it's highly reliable.
Host: That’s a great start. What about the prediction phase?
Expert: This is one of the most interesting findings. They tested two models: a complex, finance-specific model called FinBERT, and a simpler one called a Random Forest. The simpler Random Forest model actually performed better at predicting financial growth.
Host: That is surprising. You’d think the more sophisticated A.I. would have the edge.
Expert: It’s a great reminder that in A.I., bigger and more complex isn't always better. For a specific, well-defined task, a more straightforward and interpretable model can be more effective.
Host: And what about those A.I.-generated summaries? Were they useful?
Expert: They were a huge success. On a 4-point scale, the summaries received an average rating of 3.69 for factual consistency and readability. This proves the system can not only find and predict but also communicate its findings effectively.
Host: This is where it gets really interesting for our audience. Let's talk about the bottom line. Why does this matter for business professionals?
Expert: For investors and financial analysts, it's a game-changer for efficiency and accuracy. It transforms days of manual research into an automated process, providing a data-driven forecast based on the company's own narrative. It helps level the playing field.
Host: And what about for the companies writing these reports? Is there a takeaway for them?
Expert: Absolutely. It underscores the growing importance of clarity in financial disclosures. This study shows that the specific language companies use to describe their future is being quantified and used for predictions. Vague phrasing, which the study found was an issue for cash flow metrics, can now be automatically flagged.
Host: So this is about turning all that corporate language, that unstructured data, into something structured and actionable.
Expert: Precisely. It’s a perfect example of using A.I. to unlock the value hidden in vast amounts of text, enabling faster, more transparent, and ultimately better-informed financial decisions.
Host: Fantastic. So, to summarize, this study has developed an automated A.I. pipeline that can read, interpret, and forecast from dense 10-K reports with high accuracy.
Host: The key takeaways for us are that simpler A.I. models can outperform complex ones for certain tasks, and that Generative A.I. is proving to be a reliable tool for making complex data accessible.
Host: Alex Ian Sutherland, thank you for making this complex study so clear for us.
Expert: My pleasure, Anna.
Host: And to our listeners, thank you for tuning into A.I.S. Insights, powered by Living Knowledge. Join us next time.
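The first extraction phase Alex describes — flagging sentences that pair a forward-looking cue ("we expect," "we anticipate") with a named financial metric — can be sketched in miniature. This is an illustrative simplification, not the study's actual NLP pipeline; the cue phrases, metric list, and sample text below are invented for demonstration.

```python
import re

# Hypothetical cue phrases and metric names, for illustration only;
# the study's actual models are more sophisticated than keyword matching.
FORWARD_CUES = ("we anticipate", "we expect", "we believe", "we project")
METRICS = ("revenue", "net income", "cash flow", "operating income")

def extract_forward_looking(text: str) -> list[str]:
    """Return sentences containing both a forward-looking cue
    and a reference to a financial metric."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits = []
    for s in sentences:
        low = s.lower()
        if any(c in low for c in FORWARD_CUES) and any(m in low for m in METRICS):
            hits.append(s.strip())
    return hits

report = ("Our stores performed well. We expect revenue to grow next year. "
          "We believe weather was a factor. We anticipate operating income to decline.")
print(extract_forward_looking(report))
# → ['We expect revenue to grow next year.', 'We anticipate operating income to decline.']
```

Note that the sentence "We believe weather was a factor" is correctly skipped: it is forward-looking in tone but mentions no financial metric, which is the filtering the study's first phase performs at scale.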
International Conference on Wirtschaftsinformatik (2025)
Service Innovation through Data Ecosystems – Designing a Recombinant Method
Philipp Hansmeier, Philipp zur Heiden, and Daniel Beverungen
This study designs a new method, RE-SIDE (recombinant service innovation through data ecosystems), to guide service innovation within complex, multi-actor data environments. Using a design science research approach, the paper develops and applies a framework that accounts for the broader repercussions of service system changes at an ecosystem level, demonstrated through an innovative service enabled by a cultural data space.
Problem
Traditional methods for service innovation are designed for simple systems, typically involving just a provider and a customer. These methods are inadequate for today's complex 'service ecosystems,' which are driven by shared data spaces and involve numerous interconnected actors. There is a lack of clear, actionable methods for companies to navigate this complexity and design new services effectively at an ecosystem level.
Outcome
- The study develops the RE-SIDE method, a new framework specifically for designing services within complex data ecosystems.
- The method extends existing service engineering standards by adding two critical phases: an 'ecosystem analysis phase' for identifying partners and opportunities, and an 'ecosystem transformation phase' for adapting to ongoing changes.
- It provides businesses with a structured process to analyze the broader ecosystem, understand their own role, and systematically co-create value with other actors.
- The paper demonstrates the method's real-world applicability by designing a 'Culture Wallet' service, which uses shared data from cultural institutions to offer personalized recommendations and rewards to users.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. In today's hyper-connected world, innovation rarely happens in a vacuum. It happens in complex networks of partners, customers, and data. So how can businesses navigate this? Today we're looking at a fascinating study titled "Service Innovation through Data Ecosystems – Designing a Recombinant Method".
Host: It proposes a new method to guide service innovation in these complex, multi-company data environments. Here to break it all down for us is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Why did we need a new method for service innovation in the first place? What problem is this study trying to solve?
Expert: The core problem is that most traditional methods for creating new services are outdated. They were designed for a simple, two-way relationship: a single company providing a service to a single customer.
Host: Like a coffee shop selling a latte.
Expert: Exactly. But today, we operate in what the study calls 'service ecosystems'. Think about the connected car industry or smart agriculture. These aren't simple transactions. You have dozens of companies—carmakers, software developers, data providers, insurance firms—all interconnected and sharing data to create value.
Host: And the old rulebook doesn't apply to that complex game.
Expert: Precisely. The old methods fall short. They don't give companies a clear, actionable roadmap for how to find partners, leverage shared data, and design new services in this crowded and constantly changing environment. There was a real gap between the potential of these data ecosystems and the ability of businesses to innovate within them.
Host: So, how did the researchers approach tackling this challenge?
Expert: They used an approach called design science research. In simple terms, they didn't just study the problem from afar; they rolled up their sleeves and built a practical solution. They designed and developed a new method—a tangible framework that companies can actually use to engineer new services at an ecosystem level.
Host: And that new method is called RE-SIDE. Tell us about the key findings. What makes this framework different?
Expert: The biggest innovation in the RE-SIDE method is that it adds two critical new phases to existing service design processes. The first is the 'Ecosystem Analysis Phase'.
Host: What does that involve?
Expert: It's essentially a strategic reconnaissance mission. Before you even start designing a service, the method tells you to stop and map the entire landscape. Who are the other actors? What data do they have? Where are the opportunities for collaboration? It forces you to look beyond your own four walls and understand the entire playing field.
Host: That makes a lot of sense. And what’s the second new phase?
Expert: That's the 'Ecosystem Transformation Phase'. This acknowledges that these ecosystems are alive—they're constantly evolving. New partners join, new data becomes available, customer needs change. This phase is a continuous process of monitoring, adapting, and transforming your service to stay relevant and aligned with the ecosystem's evolution.
Host: So it's not a one-and-done process. It builds in agility.
Expert: Exactly. And the study demonstrated how this works with a fantastic real-world example: a service they call the 'Culture Wallet'.
Host: A wallet for culture? I’m intrigued.
Expert: Imagine an app on your phone. Multiple cultural institutions—museums, theaters, concert venues—all agree to share their event data into a common, secure data space. The 'Culture Wallet' app uses this shared data to give you personalized recommendations for events near you. It could also act as a digital loyalty card, rewarding you with discounts for attending multiple venues.
Host: I can see how that couldn't be built by one institution alone.
Expert: Absolutely. To create the Culture Wallet, a developer would have to use the RE-SIDE method. They'd first analyze the ecosystem of cultural partners, then select the right ones to collaborate with, and finally, be ready to adapt as new venues join or the available data changes over time.
Host: This is incredibly practical. Let's get to the bottom line, Alex. Why does this matter for business leaders listening today? What are the key takeaways?
Expert: I see three major takeaways. First, it provides a blueprint for shifting from pure competition to collaborative innovation. In a data ecosystem, your greatest opportunities may come from partnering with others, and this method shows you how to do it strategically.
Host: So it’s a guide to co-creation.
Expert: Yes. Second, it de-risks innovation. By forcing you to do that ecosystem analysis upfront, you're making much more informed decisions about where to invest your resources, who to partner with, and what services are actually viable. It reduces the guesswork.
Host: And the third takeaway?
Expert: It's about building for resilience. That 'Ecosystem Transformation' phase is the key to future-proofing your services. Businesses that build adaptability into their DNA from the start are the ones that will not only survive but thrive in today's dynamic markets.
Host: So it’s about having a strategic map to not just enter, but successfully navigate, these complex new business environments.
Expert: That's the perfect way to put it.
Host: To sum it up for our listeners: traditional service innovation models are insufficient for today's interconnected data ecosystems. This study delivers the RE-SIDE method, a practical framework that adds crucial ecosystem analysis and transformation phases. It gives businesses a clear process to collaborate, innovate, and adapt in a constantly changing world.
Host: Alex, thank you so much for these powerful insights.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning into A.I.S. Insights — powered by Living Knowledge. Join us next time as we decode another key study shaping the future of business and technology.
Service Ecosystem, Data Ecosystem, Data Space, Service Engineering, Design Science Research
International Conference on Wirtschaftsinformatik (2025)
AI Agents as Governance Actors in Data Trusts – A Normative and Design Framework
Arnold F. Arz von Straussenburg, Jens J. Marga, Timon T. Aldenhoff, and Dennis M. Riehle
This study proposes a design theory to safely and ethically integrate Artificial Intelligence (AI) agents into the governance of data trusts. The paper introduces a normative framework that unifies fiduciary principles, institutional trust, and AI ethics. It puts forward four specific design principles to guide the development of AI systems that can act as responsible governance actors within these trusts, ensuring they protect beneficiaries' interests.
Problem
Data trusts are frameworks for responsible data management, but integrating powerful AI systems creates significant ethical and security challenges. AI can be opaque and may have goals that conflict with the interests of data owners, undermining the fairness and accountability that data trusts are designed to protect. This creates a critical need for a governance model that allows organizations to leverage AI's benefits without compromising their fundamental duties to data owners.
Outcome
- The paper establishes a framework to guide the integration of AI into data trusts, ensuring AI actions align with ethical and fiduciary responsibilities.
- It introduces four key design principles for AI agents: 1) Fiduciary alignment to prioritize beneficiary interests, 2) Accountability through complete traceability and oversight, 3) Transparent explainability for all AI decisions, and 4) Autonomy-preserving oversight to maintain robust human supervision.
- The research demonstrates that AI can enhance efficiency in data governance without eroding stakeholder trust or ethical standards if implemented correctly.
- It provides actionable recommendations, such as automated audits and dynamic consent mechanisms, to ensure the responsible use of AI within data ecosystems for the common good.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re exploring a critical challenge at the intersection of data and artificial intelligence. We’ll be discussing a new study titled "AI Agents as Governance Actors in Data Trusts – A Normative and Design Framework."
Host: In essence, the study proposes a new way to safely and ethically integrate AI into the governance of data trusts, which are frameworks designed to manage data responsibly on behalf of others.
Host: With me today is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Why is integrating AI into these data trusts such a significant problem for businesses?
Expert: Well, Anna, organizations are increasingly using data trusts to build confidence with their customers and partners. They’re a promise of responsible data management. But when you introduce powerful AI, you introduce risk.
Expert: The study highlights that many AI systems are like "black boxes." We don't always know how they make decisions. This opacity can clash with the core duties of a data trust, which are based on loyalty and transparency.
Expert: The fundamental problem is a tension between the efficiency AI offers and the accountability that a trust demands. You could have an AI that's optimizing for a business goal that isn't perfectly aligned with the interests of the people who provided the data, and that's a serious ethical and legal breach.
Host: So how did the researchers approach solving this high-stakes problem?
Expert: They took a design-focused approach. Instead of just theorizing, they developed a concrete framework by synthesizing insights from three distinct fields: the legal principles of fiduciary duty, the organizational science of institutional trust, and the core tenets of AI ethics.
Expert: This allowed them to build a practical blueprint that translates these high-level ethical goals into actionable design principles for building AI systems.
Host: And what were the main findings? What does this blueprint actually look like?
Expert: The study outcome is a set of four clear design principles for any AI agent operating within a data trust. Think of them as the pillars for building trustworthy AI governance.
Expert: The first is **Fiduciary Alignment**. This means the AI must be explicitly designed to prioritize the interests of the data owners, or beneficiaries, above all else. Its goals have to be their goals.
Expert: Second is **Accountability through Traceability**. Since an AI can't be held legally responsible, every action it takes must be recorded in an unchangeable log. This creates a complete audit trail, so a human is always accountable.
Host: So you can always trace a decision back to its source and understand the context.
Expert: Exactly. The third principle builds on that: **Transparent Explainability**. The AI's decisions can't be a mystery. Stakeholders must be able to see and understand, in simple terms, why a decision was made. The study suggests things like real-time transparency dashboards.
Expert: And finally, the fourth principle is **Autonomy-Preserving Oversight**. This is crucial. It means humans must always have the final say. Data owners should have dynamic control over their consent, not just a one-time checkbox, and human trustees must always have the power to override the AI.
Host: This all sounds incredibly robust. But let's get to the bottom line for our listeners. Why does this matter for business leaders? What are the practical takeaways?
Expert: This is the most important part. For businesses, this framework is essentially a roadmap for de-risking AI adoption in data-sensitive areas. Following these principles helps you build genuine, provable trust with your customers.
Expert: In a competitive market, being the company that can demonstrate truly responsible AI governance is a massive advantage. It moves trust from a vague promise to a verifiable feature of your service.
Expert: The study also provides actionable ideas. Businesses can start implementing dynamic consent portals where users can actively manage how their data is used by AI. They can build automated audit systems that flag any AI behavior that deviates from policy, ensuring a human is always in the loop for critical decisions.
Expert: Ultimately, adopting a framework like this is about future-proofing your business. Data regulations are only getting stricter. Building this ethical and accountable foundation now isn't just about compliance; it's about leading the way and building a sustainable, trust-based relationship with your market.
Host: So, to summarize, the challenge is using powerful AI in data trusts without eroding the very foundation of trust they stand on.
Host: This study offers a solution through four design principles: ensuring the AI is aligned with beneficiary interests, making it fully accountable and traceable, keeping it transparent, and, most importantly, always preserving meaningful human oversight.
Host: Alex, thank you for breaking down this complex and vital topic for us.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights, powered by Living Knowledge.
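The "Accountability through Traceability" principle calls for an unchangeable log of every AI action. One common way to make a log tamper-evident is hash chaining, where each entry embeds the hash of the previous one. The paper does not prescribe this particular implementation; the sketch below, with invented actors and actions, simply illustrates the idea.

```python
import hashlib
import json

class AuditLog:
    """Tamper-evident append-only log: each entry embeds the hash of the
    previous entry, so any later modification breaks the chain.
    Illustrative sketch only; not the paper's prescribed design."""
    def __init__(self):
        self.entries = []

    def record(self, actor: str, action: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"actor": actor, "action": action, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        """Recompute every hash and check the chain links up."""
        prev = "0" * 64
        for e in self.entries:
            body = {"actor": e["actor"], "action": e["action"], "prev": e["prev"]}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("ai-agent", "approved a data-sharing request")   # hypothetical action
log.record("trustee", "reviewed and confirmed the decision")
print(log.verify())  # → True while the log is intact
log.entries[0]["action"] = "denied a data-sharing request"   # simulate tampering
print(log.verify())  # → False: the edit breaks the hash chain
```

This is the mechanism that lets a human trustee audit what the AI agent did and detect after-the-fact edits, supporting the "human is always accountable" requirement.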
Data Trusts, Normative Framework, AI Governance, Fairness, AI Agents
International Conference on Wirtschaftsinformatik (2025)
Generative AI Value Creation in Business-IT Collaboration: A Social IS Alignment Perspective
Lukas Grützner, Moritz Goldmann, Michael H. Breitner
This study empirically assesses the impact of Generative AI (GenAI) on the social aspects of business-IT collaboration. Using a literature review, an expert survey, and statistical modeling, the research explores how GenAI influences communication, mutual understanding, and knowledge sharing between business and technology departments.
Problem
While aligning IT with business strategy is crucial for organizational success, the social dimension of this alignment—how people communicate and collaborate—is often underexplored. With the rapid integration of GenAI into workplaces, there is a significant research gap concerning how these new tools reshape the critical human interactions between business and IT teams.
Outcome
- GenAI significantly improves formal business-IT collaboration by enhancing structured knowledge sharing, promoting the use of a common language, and increasing formal interactions.
- The technology helps bridge knowledge gaps by making technical information more accessible to business leaders and business context clearer to IT leaders.
- GenAI has no significant impact on informal social interactions, such as networking and trust-building, which remain dependent on human-driven leadership and engagement.
- Management must strategically integrate GenAI to leverage its benefits for formal communication while actively fostering an environment that supports crucial interpersonal collaboration.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business, technology, and human ingenuity, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into how Generative AI is changing one of the most critical relationships in any company: the collaboration between business and IT departments.
Host: We’re exploring a fascinating study titled "Generative AI Value Creation in Business-IT Collaboration: A Social IS Alignment Perspective". It empirically assesses how tools like ChatGPT are influencing communication, mutual understanding, and knowledge sharing between these essential teams.
Host: And to help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Getting business and IT teams on the same page has always been a challenge, but why is this 'social alignment', as the study calls it, so critical right now?
Expert: It’s critical because technical integration isn't enough for success. Social alignment is about the human element—the relationships, shared values, and mutual understanding between business and IT leaders.
Expert: Without it, organizations see reduced benefits from their tech investments and lose strategic agility. With GenAI entering the workplace so rapidly, there's been a huge question mark over whether these tools help or hinder those crucial human connections.
Host: So there's a real gap in our understanding. How did the researchers go about measuring something as intangible as human collaboration?
Expert: They used a really robust, three-part approach. First, they conducted an extensive literature review to build a solid theoretical foundation. Then, they surveyed 61 senior executives from both business and IT across multiple countries to get real-world data.
Expert: Finally, they used a sophisticated statistical model to analyze those survey responses, allowing them to pinpoint the specific ways GenAI usage impacts collaboration.
Host: That sounds very thorough. Let's get to the results. What did they find?
Expert: The findings were fascinating, primarily because of the distinction they revealed. The study found that GenAI significantly improves *formal* collaboration.
Host: What do you mean by formal collaboration in this context?
Expert: Think of the structured parts of work. GenAI excels at enhancing structured knowledge sharing, creating standardized reports, and helping to establish a common language between departments. For instance, it can translate complex technical specs into a simple summary for a business leader.
Host: So it helps with the official processes. What about the other side of the coin?
Expert: That's the most important finding. The study showed that GenAI has no significant impact on *informal* social interactions. These are the human-driven activities like networking, building trust over lunch, or spontaneous chats in the hallway that often lead to breakthroughs. Those remain entirely dependent on human leadership and engagement.
Host: So GenAI is a tool for structure, but not a replacement for relationships. Did the study find it helps bridge the knowledge gap between these teams?
Expert: Absolutely. This was another major outcome. GenAI acts as a kind of universal translator. It makes technical information more accessible to business people and, in reverse, it makes business context and strategy clearer to IT leaders. It effectively helps create a shared understanding where one might not have existed before.
Host: This is incredibly relevant for anyone in management. Alex, let’s bring it all home. If I'm a business leader listening now, what is the key takeaway? What should I do differently on Monday?
Expert: The biggest takeaway is to be strategic. Don’t just deploy GenAI and hope for the best. The study suggests you should use these tools to streamline your formal communication channels—think AI-assisted meeting summaries, project documentation, and internal knowledge bases. This frees up valuable time.
Host: And what about the informal side you mentioned?
Expert: This is the crucial part. While you're automating the formal stuff, you must actively double down on fostering human-to-human interaction. The study makes it clear that trust and strong working relationships don’t happen by accident. Leaders need to consciously create opportunities for that interpersonal connection, because the AI won't do it for you.
Host: So it’s a 'best of both worlds' approach. Use AI to create efficiency in structured tasks, which then gives leaders more time and space to focus on culture and true human collaboration.
Expert: Exactly. It’s about leveraging technology to empower people, not replace the connections between them.
Host: A powerful conclusion. To recap for our listeners: this study shows that Generative AI is a fantastic tool for improving the formal, structured side of business-IT collaboration, helping to bridge knowledge gaps and create a common language.
Host: However, it doesn’t affect the informal, human-to-human interactions that build trust and culture. The key for business leaders is to implement AI strategically for efficiency, while actively nurturing the interpersonal connections that truly drive success.
Host: Alex Ian Sutherland, thank you for breaking down this complex topic into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. We’ll see you next time.
Information systems alignment, social, GenAI, PLS-SEM
International Conference on Wirtschaftsinformatik (2025)
Value Propositions of Personal Digital Assistants for Process Knowledge Transfer
Paula Elsensohn, Mara Burger, Marleen Voß, and Jan vom Brocke
This study investigates the value propositions of Personal Digital Assistants (PDAs), a type of AI tool, for improving how knowledge about business processes is transferred within organizations. Using qualitative interviews with professionals across diverse sectors, the research identifies nine specific benefits of using PDAs in the context of Business Process Management (BPM). The findings are structured into three key dimensions: accessibility, understandability, and guidance.
Problem
In modern businesses, critical knowledge about how work gets done is often buried in large amounts of data, making it difficult for employees to access and use effectively. This inefficient transfer of 'process knowledge' leads to errors, inconsistent outcomes, and missed opportunities for improvement. The study addresses the challenge of making this vital information readily available and understandable to the right people at the right time.
Outcome
- The study identified nine key value propositions for using PDAs to transfer process knowledge, grouped into three main categories: accessibility, understandability, and guidance.
- PDAs improve accessibility by automating tasks and enabling employees to find knowledge and documentation much faster than through manual searching.
- They enhance understandability by facilitating user education, simplifying the onboarding of new employees, and performing context-aware analysis of processes.
- PDAs provide active guidance by offering real-time process advice, helping to optimize and standardize workflows, and supporting better decision-making with relevant data.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into how AI can unlock one of a company's most valuable but often hidden assets: its process knowledge. We're looking at a study titled "Value Propositions of Personal Digital Assistants for Process Knowledge Transfer".
Host: It explores how AI tools, like the digital assistants on our phones and computers, can fundamentally change how employees learn and execute business processes. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the core issue. The study summary says that critical knowledge on 'how work gets done' is often buried in data. What does that problem look like in the real world?
Expert: It’s a huge, everyday problem. Imagine a new employee trying to figure out how to submit a complex expense report, or a sales manager trying to follow a new client onboarding protocol.
Expert: The information is *somewhere*—in a hundred-page PDF, an old email chain, or a clunky internal wiki. The study points out that these traditional methods are failing to provide timely and relevant information. This leads to wasted time, costly errors, and inconsistent work across the organization.
Host: So we have the right information, but people just can't get to it when they need it. How did the researchers investigate if AI assistants could be the solution?
Expert: They went straight to the source. They conducted in-depth interviews with twelve professionals from various sectors, like finance and industry—people in managerial roles who have real-world experience with these challenges and technologies.
Expert: They asked them about their experiences with Personal Digital Assistants, or PDAs, and how they could be used to transfer this vital process knowledge. They then analyzed these conversations to identify the most significant benefits.
Host: And what did they find? The summary groups the benefits into three main categories: accessibility, understandability, and guidance. Let's start with accessibility.
Expert: Accessibility is about speed and simplicity. The professionals interviewed said that instead of manually searching, an employee can just ask a PDA, "What's the next step for processing this invoice?"
Expert: The PDA can find the answer instantly. It can even automate parts of the task, like opening the right software or filling out a form. One interviewee described it as creating a "single source of truth" that’s easy for everyone to access.
Host: So it’s not just finding information, but also getting a head start on the work. What about the next category, understandability?
Expert: Understandability is about making sure the knowledge actually makes sense to the user. This is where PDAs really shine. For example, they can provide interactive tutorials to educate employees on a new process.
Expert: The study highlights their value in onboarding new hires. A new employee can ask the PDA dozens of questions they might be hesitant to ask a busy colleague. The system can also perform context-aware analysis, meaning it integrates with other business systems like a CRM to provide information that’s specific to the employee’s exact situation.
Host: That personalization seems critical. This brings us to the final dimension: guidance. How is that different from just making information understandable?
Expert: Guidance is proactive. It's about the PDA not just answering questions, but actively steering the employee through a process. One interviewee called this "the next level."
Expert: Imagine a PDA offering real-time, step-by-step instructions as you complete a task. It can also help optimize workflows by comparing how a process is being done to an ideal model and suggesting improvements. For managers, this is huge. As one professional in the study noted, if you have 10,000 employees saving 10 minutes a day, the impact is massive.
Host: That’s a powerful example. So, Alex, let’s bring it all together. For the business leaders listening, what is the key takeaway? Why does this matter for their bottom line?
Expert: It matters because it addresses core operational challenges. First, you get a significant boost in efficiency and productivity. Less time searching means more time doing value-added work.
Expert: Second, it drives consistency and quality. By using a PDA as a single source of truth, you reduce errors and ensure that critical processes, especially in regulated fields, are followed correctly every single time.
Expert: And finally, it creates a more agile and knowledgeable workforce. Employees are empowered with the information they need, when they need it. This speeds up training, improves decision-making, and builds a foundation for continuous improvement.
Host: So it's about making our processes, and our people, smarter. To recap: businesses are struggling with making their internal process knowledge useful. This study shows that AI-powered digital assistants can solve this by making that knowledge accessible, understandable, and by providing active guidance.
Host: The result is a more efficient, consistent, and intelligent organization. Alex, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the future of business and technology.
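The "what's the next step" lookup described in this episode can be sketched in a few lines. This is a minimal illustration assuming a simple linear process definition; the invoice steps and the `next_step` helper are hypothetical, not taken from the study:

```python
# Minimal sketch of a process-knowledge lookup in the spirit of the
# "accessibility" value proposition. The process steps and the helper
# are illustrative assumptions, not the study's implementation.

INVOICE_PROCESS = [
    "receive invoice",
    "check against purchase order",
    "obtain approval",
    "schedule payment",
    "archive invoice",
]

def next_step(process, current):
    """Return the step that follows `current`, acting as a single
    source of truth for 'what do I do next?' questions."""
    steps = [s.lower() for s in process]
    idx = steps.index(current.lower())
    if idx + 1 < len(process):
        return process[idx + 1]
    return "process complete"

print(next_step(INVOICE_PROCESS, "check against purchase order"))
# -> obtain approval
```

The back-of-the-envelope figure quoted above also checks out: 10,000 employees saving 10 minutes a day is 100,000 minutes, or roughly 1,667 working hours, per day.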
Personal Digital Assistant, Value Proposition, Process Knowledge, Business Process Management, Guidance
International Conference on Wirtschaftsinformatik (2025)
Exploring the Design of Augmented Reality for Fostering Flow in Running: A Design Science Study
Julia Pham, Sandra Birnstiel, Benedikt Morschheuser
This study explores how to design Augmented Reality (AR) interfaces for sport glasses to help runners achieve a state of 'flow,' or peak performance. Using a Design Science Research approach, the researchers developed and evaluated an AR prototype over two iterative design cycles, gathering feedback from nine runners through field tests and interviews to derive design recommendations.
Problem
Runners often struggle to achieve and maintain a state of flow due to the difficulty of monitoring performance without disrupting their rhythm, especially in dynamic outdoor environments. While AR glasses offer a potential solution by providing hands-free feedback, there is a significant research gap on how to design effective, non-intrusive interfaces that support, rather than hinder, this immersive state.
Outcome
- AR interfaces can help runners achieve flow by providing continuous, non-intrusive feedback directly in their field of view, fulfilling the need for clear goals and unambiguous feedback.
- Non-numeric visual cues, such as expanding circles or color-coded warnings, are more effective than raw numbers for conveying performance data without causing cognitive overload.
- Effective AR design for running must be adaptive and customizable, allowing users to choose the metrics they see and control when the display is active to match personal goals and minimize distractions.
- The study produced four key design recommendations: provide easily interpretable feedback beyond numbers, ensure a seamless and embodied interaction, allow user customization, and use a curiosity-inducing design to maintain engagement.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we’re looking at how technology can help us achieve that elusive state of peak performance, often called 'flow'. We’re diving into a fascinating study titled "Exploring the Design of Augmented Reality for Fostering Flow in Running." Essentially, it explores how to design AR interfaces for sport glasses to help runners get, and stay, in the zone. Here to break it down for us is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: So, Alex, let's start with the big picture. Most serious runners I know use a smartwatch. What's the problem this study is trying to solve that a watch doesn't already solve?
Expert: That's the perfect question. The problem is disruption. To get into a state of flow, you need focus. But to check your pace or heart rate on a watch, you have to break your form, look down, and interact with a device. That single action can pull you right out of your rhythm.
Host: It completely breaks your concentration.
Expert: Exactly. And AR sport glasses offer a hands-free solution by putting data directly in your field of view. But that creates a new challenge: how do you show that information without it becoming just another distraction? That’s the critical design gap this study tackles.
Host: So how did the researchers approach this? It sounds tricky to get right.
Expert: They used a very practical, hands-on method called Design Science Research. They didn't just theorize; they built and tested. They took a pair of commercially available AR glasses and designed an interface. Then, they had nine real runners use the prototype on their actual training routes.
Host: And they got feedback?
Expert: Yes, in two distinct cycles. The first design was very basic—it just showed the runner's heart rate as a number. After getting feedback, they created a second, more advanced version based on what the runners said they needed. This iterative process of build, test, and refine is key.
Host: I'm curious what they found. Did the second version work better?
Expert: It worked much better. And this leads to one of the biggest findings: for high-focus activities, non-numeric visual cues are far more effective than raw numbers.
Host: What does that mean in practice? What did the runners see?
Expert: Instead of just a number, the improved design used a rotating circle that would expand as the runner approached their target heart rate, and then fade away once they were in the zone to minimize distraction. It also used a simple red frame as a warning if their heart rate got too high. It’s about making the data interpretable at a glance, without conscious thought.
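The heart-rate cue described here could be prototyped roughly as follows. This is a sketch only: the zone bounds and the linear scaling are assumptions for illustration, not values reported by the study.

```python
# Illustrative mapping from heart rate to the two visual cues the
# study describes: an expanding circle while approaching the target
# zone, and a red warning frame above an upper limit. The zone
# bounds and linear scaling are assumed, not from the paper.

def circle_scale(hr, zone_low=130, zone_high=150, hr_min=60):
    """Scale in [0, 1]: grows as the runner nears the target zone.
    Returns None (cue faded out) once the runner is inside the zone."""
    if zone_low <= hr <= zone_high:
        return None  # in the zone: fade the cue to minimise distraction
    if hr < zone_low:
        return max(0.0, (hr - hr_min) / (zone_low - hr_min))
    return 1.0  # above the zone; the warning frame takes over

def warning_frame(hr, zone_high=150):
    """True when the red warning frame should be shown."""
    return hr > zone_high

print(circle_scale(95))    # 0.5: circle half expanded
print(circle_scale(140))   # None: in the zone, cue faded
print(warning_frame(160))  # True: heart rate too high
```

The point of the non-numeric mapping is that the runner never has to read a number: the cue's size and colour carry the same information at a glance.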
Host: So it becomes more of a feeling than a number you have to process. What else stood out?
Expert: Customization was absolutely critical. The study found that a one-size-fits-all approach fails because runners have different goals. Some want to track pace, others heart rate. Experienced runners might prefer minimal data, relying more on how their body feels, while beginners want more constant guidance.
Host: And the AR interface needed to adapt to that.
Expert: Precisely. The system needs to be adaptive, allowing users to choose their metrics and even turn the display off completely with a simple button press. Giving the user that control is essential to supporting flow, not breaking it.
Host: This is all very interesting for the fitness tech world, but let's broaden it out for our business audience. Why does a study about runners and AR matter for, say, a logistics manager or a software developer?
Expert: Because this is a masterclass in effective user interface design for any high-concentration task. The core principle—reducing cognitive load—is universal. Think about a technician repairing complex machinery using AR instructions. You don’t want them distracted by dense text; you want simple, intuitive visual cues, just like the expanding circle for the runner.
Host: So this is about the future of how we interact with information in any professional setting.
Expert: Absolutely. The second big takeaway for business is the power of deep personalization. This study shows that to create a truly valuable product, you have to allow users to tailor the experience to their specific goals and expertise level. This isn't just about changing the color scheme; it's about fundamentally altering the information and interface based on the user's context.
Host: And are there other applications that come to mind?
Expert: Definitely. Think of heads-up displays for pilots or surgeons. In those fields, providing critical data without causing distraction can be a matter of life and death. This study provides a blueprint for what the researchers call "embodied interaction," where the technology feels like a seamless extension of the user, not a separate tool they have to consciously operate. That is the holy grail for a huge range of industries.
Host: So, to summarize: the future of effective digital interfaces, especially in AR, isn't about throwing more data at people. It's about presenting the right information, in the most intuitive way possible, and giving the user ultimate control.
Expert: You've got it. It’s about designing for flow, whether you're on a 10k run or a factory floor.
Host: A powerful insight into a future that’s coming faster than we think. Alex Ian Sutherland, thank you so much for your analysis today.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning into A.I.S. Insights. Join us next time as we continue to connect research with reality.
International Conference on Wirtschaftsinformatik (2025)
Overcoming Algorithm Aversion with Transparency: Can Transparent Predictions Change User Behavior?
Lasse Bohlen, Sven Kruschel, Julian Rosenberger, Patrick Zschech, and Mathias Kraus
This study investigates whether making a machine learning (ML) model's reasoning transparent can help overcome people's natural distrust of algorithms, known as 'algorithm aversion'. Through a user study with 280 participants, researchers examined how transparency interacts with the previously established method of allowing users to adjust an algorithm's predictions.
Problem
People often hesitate to rely on algorithms for decision-making, even when the algorithms are superior to human judgment. While giving users control to adjust algorithmic outputs is known to reduce this aversion, it has been unclear whether making the algorithm's 'thinking process' transparent would also help, or perhaps even be more effective.
Outcome
- Giving users the ability to adjust an algorithm's predictions significantly reduces their reluctance to use it, confirming findings from previous research.
- In contrast, simply making the algorithm transparent by showing its decision logic did not have a statistically significant effect on users' willingness to choose the model.
- The ability to adjust the model's output (adjustability) appears to be a more powerful tool for encouraging algorithm adoption than transparency alone.
- The effects of transparency and adjustability were found to be largely independent of each other, rather than having a combined synergistic effect.
Host: Welcome to A.I.S. Insights, the podcast powered by Living Knowledge, where we translate complex research into actionable business strategy. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a study that tackles a huge barrier in A.I. adoption: our own distrust of algorithms. The study is titled "Overcoming Algorithm Aversion with Transparency: Can Transparent Predictions Change User Behavior?".
Host: It investigates whether making a machine learning model's reasoning transparent can help overcome that natural hesitation. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, Alex, let's start with the big picture. We hear all the time that A.I. can outperform humans at specific tasks, yet people are often reluctant to use it. What’s the core problem this study is addressing?
Expert: It's a fascinating psychological phenomenon called 'algorithm aversion'. Even when we know an algorithm is statistically superior, we hesitate to trust it. The study points out a few reasons for this. We have a desire for personal control, we feel algorithms can't handle unique situations, and we are especially sensitive when an algorithm makes a mistake.
Host: It’s the classic ‘black box’ problem, right? We don’t know what’s happening inside, so we don’t trust the output.
Expert: Exactly. And for years, one popular solution was to give users the ability to slightly adjust or override the algorithm's final answer. This was known to help. But the big question this study asked was: what if we just open the black box? Is making the A.I. transparent even more effective than giving users control?
Host: That’s a great question. So how did the researchers test this?
Expert: They designed a very clever user study with 280 participants. The task was simple and intuitive: predict the number of rental bikes needed on a given day based on factors like the weather, the temperature, and the time of day.
Host: A task where you can see an algorithm being genuinely useful.
Expert: Precisely. The participants were split into different groups. Some were given the A.I.'s prediction and had to accept it or leave it. Others were allowed to adjust the A.I.'s prediction slightly. Then, layered on top of that, some participants could see simple charts that explained *how* the algorithm reached its conclusion—that was the transparency. Others just got the final number without any explanation.
Host: Okay, a very clean setup. So what did they find? Which was more powerful—control or transparency?
Expert: The results were incredibly clear. Giving users the ability to adjust the algorithm's prediction was the game-changer. It significantly reduced their reluctance to use the model, confirming what previous studies had found.
Host: So having that little bit of control, that final say, makes all the difference. What about transparency? Did seeing the A.I.'s 'thinking process' help build trust?
Expert: This is the most surprising finding. On its own, transparency had no statistically significant effect. People who saw how the algorithm worked were not any more likely to choose to use it than those who didn't.
Host: Wow, so showing your work doesn't necessarily win people over. What about combining the two? Did transparency and the ability to adjust the output have a synergistic effect?
Expert: You'd think so, but no. The study found the effects were largely independent. Giving users control was powerful, and transparency was not. Putting them together didn't create any extra boost in adoption.
Host: This is where it gets really interesting for our listeners. Alex, what does this mean for business leaders? How should this change the way we think about rolling out A.I. tools?
Expert: I think there are two major takeaways. First, if your primary goal is user adoption, prioritize features that give your team a sense of control. Don't just build a perfect, unchangeable model. Instead, build a 'human-in-the-loop' system where users can tweak, refine, or even override the A.I.'s suggestions.
Host: So, empowerment over explanation, at least for getting people on board.
Expert: Exactly. The second takeaway is about rethinking what we mean by 'transparency'. This study suggests that passive transparency—just showing a static chart of the model's logic—isn't enough. People need to see the benefit. Future systems might need more interactive explanations, where a user can ask 'what-if' questions and see how the A.I.'s recommendation changes. It's about engagement, not just a lecture.
Host: That makes a lot of sense. It’s the difference between looking at a car engine and actually getting to turn the key.
Expert: A perfect analogy. This study really drives home that psychological ownership is key. When people can adjust the output, it becomes *their* decision, aided by the A.I., not a decision made *for them* by a machine. That shift is critical for building trust and encouraging use.
Host: Fantastic insights. So, to summarize for our audience: if you want your team to trust and adopt a new algorithm, giving them the power to adjust its recommendations appears far more effective than just showing them how it works. Control is king.
Host: Alex, thank you so much for breaking down this important study for us.
Expert: My pleasure, Anna.
Host: That’s all the time we have for this episode of A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to decode the research that’s shaping our future. Thanks for listening.
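A human-in-the-loop adjustment step of the kind discussed here can be sketched very simply. This is an illustration only: the study let participants adjust the model's prediction, but the bounded-clamp design and the 10% limit below are assumptions, not the paper's procedure.

```python
# Sketch of a bounded human-in-the-loop adjustment: the user gets the
# final say, but the result stays anchored to the model's forecast.
# The +/- 10% bound is an assumption for illustration.

def adjusted_forecast(model_prediction, user_adjustment, max_fraction=0.10):
    """Clamp the user's adjustment to +/- max_fraction of the model's
    prediction and return the combined forecast."""
    bound = abs(model_prediction) * max_fraction
    clamped = max(-bound, min(bound, user_adjustment))
    return model_prediction + clamped

print(adjusted_forecast(200, 15))  # 215: within the +/- 20 bound
print(adjusted_forecast(200, 40))  # 220: clamped to the bound
```

The design choice mirrors the study's finding: even a small, bounded override is enough to create the sense of ownership that drives adoption, without letting users drift far from the model's statistically superior answer.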
Algorithm Aversion, Adjustability, Transparency, Interpretable Machine Learning, Replication Study
International Conference on Wirtschaftsinformatik (2025)
Bridging Mind and Matter: A Taxonomy of Embodied Generative AI
Jan Laufer, Leonardo Banh, Gero Strobel
This study develops a comprehensive classification system, or taxonomy, for Embodied Generative AI—AI that can perceive, reason, and act in physical systems like robots. The taxonomy was created through a systematic literature review and an analysis of 40 real-world examples of this technology. The resulting framework provides a structured way to understand and categorize the various dimensions of AI integrated into physical forms.
Problem
As Generative AI (GenAI) moves from digital content creation to controlling physical agents, there has been a lack of systematic classification and evaluation methods. While many studies focus on specific applications, a clear framework for understanding the core characteristics and capabilities of these embodied AI systems has been missing. This gap makes it difficult for researchers and practitioners to compare, analyze, and optimize emerging applications in fields like robotics and automation.
Outcome
- The study created a detailed taxonomy for Embodied Generative AI to systematically classify its characteristics.
- This taxonomy is structured into three main categories (meta-characteristics): Embodiment, Intelligence, and System.
- It further breaks down these categories into 16 dimensions and 50 specific characteristics, providing a comprehensive framework for analysis.
- The framework serves as a foundational tool for future research and helps businesses and developers make informed decisions when designing or implementing embodied AI systems in areas like service robotics and industrial automation.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're bridging the gap between the digital and physical worlds. We’re diving into a fascinating new study titled "Bridging Mind and Matter: A Taxonomy of Embodied Generative AI."
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, in simple terms, what is this study all about?
Expert: Hi Anna. This study develops a comprehensive classification system for what’s called Embodied Generative AI. Think of it as AI that doesn't just write an email, but can actually perceive, reason, and act in the physical world through systems like robots or drones.
Host: So we're moving from AI on a screen to AI in a machine. That sounds like a huge leap. What's the big problem that prompted this study?
Expert: Exactly. The problem is that this field is exploding, but it's a bit like the Wild West. You have countless companies creating these incredible AI-powered robots, but there's no standard language to describe them.
Host: What do you mean by no standard language?
Expert: Well, one company might call their robot "autonomous," while another uses the same word for a system with completely different capabilities. As the study points out, this "heterogeneous field" makes it incredibly difficult for businesses to compare, analyze, and optimize these new technologies. We lack a common framework.
Host: So the researchers set out to create that framework. How did they approach such a complex task?
Expert: They used a really robust two-step process. First, they did a systematic review of existing academic literature to build an initial draft of the classification system.
Expert: But to ensure it was grounded in reality, they then analyzed 40 real-world examples—actual products from companies developing embodied AI. This combination of academic theory and practical application is what makes the final framework so powerful.
Host: And what did this framework, or taxonomy, end up looking like? What are the key findings?
Expert: The study organizes everything into three main categories, which they call meta-characteristics: Embodiment, Intelligence, and System.
Host: Okay, let's break those down. What is Embodiment?
Expert: Embodiment is all about the physical form. What does it look like—is it human-like, animal-like, or purely functional, like a factory arm? How does it sense the world? Does it have normal vision, or maybe "superhuman" perception, like the ability to detect a gas leak that a person can't?
Host: Got it. The body. So what about the second category, Intelligence?
Expert: Intelligence is the "brain." This category answers questions like: How autonomous is it? Can it learn new things, or is its knowledge fixed from pre-training? And where is this brain located? Is the processing done on the robot itself, which is called "on-premise," or is it connecting to a powerful model in the "cloud"?
Host: And the final category was System?
Expert: Yes, System is about how it all fits together. Does the robot work alone, or does it collaborate with humans or even other AI systems? And, most importantly, what kind of value does it create?
Host: That's a great question. What kinds of value did the study identify?
Expert: It's not just about efficiency. The framework identifies four types. There's Operational value, like a robot making a warehouse run faster. But there's also Psychological value, from a companion robot, Societal value, like providing public services, and even Aesthetic value, which influences our trust and acceptance of the technology.
Host: This is incredibly detailed. But this brings us to the most crucial question for our audience: Why does this matter for business? I'm a leader, why should I care about this taxonomy?
Expert: Because it’s a strategic tool for navigating this new frontier. First, for anyone looking to invest in or purchase this technology. You can use this framework as a detailed checklist to compare products from different vendors. You're not just buying a "robot"; you're buying a system with specific, definable characteristics. It ensures you make an informed decision.
Host: So it’s a buyer’s guide. What else?
Expert: It's also a product developer's blueprint. If you're building a service robot for hotels, this framework structures your entire R&D process. You can systematically define its appearance, its level of autonomy, how it will interact with guests, and whether its intelligence should be an open or closed system.
Host: And I imagine it can also help identify new opportunities?
Expert: Absolutely. The study's analysis of those 40 real-world systems acts as a market intelligence report. For instance, they found that while most systems have human-like perception, very few have that "superhuman" capability we talked about. For a company in industrial safety or agricultural monitoring, that's a clear market gap waiting to be filled. This taxonomy helps you map the landscape and find your niche.
Host: So, to summarize, this study provides a much-needed common language for the rapidly emerging world of physical, embodied AI. It gives businesses a powerful framework to better understand, compare, and strategically build the next generation of intelligent machines.
Host: Alex, thank you for making such a complex topic so clear and actionable for us.
Expert: My pleasure, Anna.
Host: And to our audience, thank you for tuning in to A.I.S. Insights. We'll see you next time.
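A taxonomy of this shape (meta-characteristics broken into dimensions with allowed characteristics) maps naturally onto nested dictionaries, which also makes the "buyer's checklist" use concrete. The three top-level categories come from the study; the sample dimensions and characteristics below are an illustrative subset drawn from the discussion above, not the paper's full set of 16 dimensions and 50 characteristics:

```python
# Encoding the taxonomy as meta-characteristic -> dimension -> allowed
# characteristics. Only the three meta-characteristics are from the
# study; the dimensions and values shown are an illustrative subset.

TAXONOMY = {
    "Embodiment": {
        "appearance": ["human-like", "animal-like", "functional"],
        "perception": ["human-like", "superhuman"],
    },
    "Intelligence": {
        "processing location": ["on-premise", "cloud"],
        "learning": ["fixed from pre-training", "continual"],
    },
    "System": {
        "collaboration": ["standalone", "with humans", "with other AI"],
        "value created": ["operational", "psychological", "societal", "aesthetic"],
    },
}

def classify(system_profile):
    """Check a vendor profile against the taxonomy and report any
    dimension values it does not recognise - a simple 'buyer's
    checklist' use of the framework."""
    unknown = []
    for meta, dims in TAXONOMY.items():
        for dim, allowed in dims.items():
            value = system_profile.get(dim)
            if value is not None and value not in allowed:
                unknown.append((dim, value))
    return unknown

profile = {"appearance": "functional", "perception": "superhuman",
           "processing location": "edge"}
print(classify(profile))  # [('processing location', 'edge')]
```

Comparing two vendor profiles then reduces to comparing two small dictionaries dimension by dimension, rather than arguing over what each vendor means by "autonomous".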
International Conference on Wirtschaftsinformatik (2025)
Workarounds—A Domain-Specific Modeling Language
Carolin Krabbe, Agnes Aßbrock, Malte Reineke, and Daniel Beverungen
This study introduces a new visual modeling language called Workaround Modeling Notation (WAMN) designed to help organizations identify, analyze, and manage employee workarounds. Using a design science approach, the researchers developed this notation and demonstrated its practical application using a real-world case from a manufacturing company. The goal is to provide a structured method for understanding the complex effects of these informal process deviations.
Problem
Employees often create 'workarounds' to bypass inefficient or problematic standard procedures, but companies lack a systematic way to assess their impact. This makes it difficult to understand the complex chain reactions these workarounds can cause, leading to missed opportunities for innovation and unresolved underlying issues. Without a clear framework, organizations struggle to make consistent decisions about whether to adopt, modify, or prevent these employee-driven solutions.
Outcome
- The primary outcome is the Workaround Modeling Notation (WAMN), a domain-specific modeling language designed to map the causes, actions, and consequences of workarounds.
- WAMN enables managers to visualize the entire 'workaround-to-innovation' lifecycle, treating workarounds not just as deviations but as potential bottom-up process improvements.
- The notation uses clear visual cues, such as color-coding for positive and negative effects, to help decision-makers quickly assess the risks and benefits of a workaround.
- By applying WAMN to a manufacturing case, the study demonstrates its ability to untangle complex interconnections between multiple workarounds and their cascading effects on different organizational levels.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a topic that happens in every company but is rarely managed well: employee workarounds. We’ll be discussing a fascinating study titled “Workarounds—A Domain-Specific Modeling Language.”
Host: To help us unpack it, we have our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, this study introduces a new visual language to help organizations identify and manage these workarounds. First, Alex, can you set the scene for us? What’s the big problem with workarounds that this study is trying to solve?
Expert: Absolutely. The core problem is that companies are flying blind. Employees invent workarounds all the time to get their jobs done, bypassing procedures they see as inefficient. But management often has no systematic way to see what’s happening or to understand the impact.
Host: So they’re like invisible, unofficial processes running inside the official ones?
Expert: Exactly. And the study points out that these can cause complex chain reactions. A simple shortcut in one department might solve a local problem but create a massive compliance risk or data quality issue somewhere else down the line. Without a clear framework, businesses can't decide if a workaround is a brilliant innovation to be adopted or a dangerous liability to be stopped.
Host: That makes sense. You can’t manage what you can’t see. How did the researchers approach creating a solution for this?
Expert: They used an approach called Design Science. Instead of just observing the problem, they set out to build a practical tool to solve it. In this case, they designed and developed a brand-new modeling language specifically for visualizing workarounds. Then they tested its applicability using a real-world case from a large manufacturing company.
Host: So they built a tool for the job. What was the main outcome? What does this tool, this new language, actually do?
Expert: The primary outcome is called the Workaround Modeling Notation, or WAMN for short. Think of it as a visual blueprint for workarounds. It allows a manager to map out the entire story: what caused the workaround, what the employee actually does, and all the consequences that follow.
Host: And what makes it so effective?
Expert: A few things. First, it treats workarounds not just as deviations, but as potential bottom-up innovations. It reframes the conversation. Second, it uses really clear visual cues. For example, positive effects of a workaround are colored green, and negative effects are red.
Host: I like that. It sounds very intuitive. You can see the balance of good and bad immediately.
Expert: Precisely. In the manufacturing case they studied, one workaround saved time on the assembly line—a positive, green effect. But it also led to inaccurate inventory records—a negative, red effect. WAMN puts both of those impacts on the same map, making the trade-offs crystal clear and untangling how one workaround can cascade into another.
Host: This is the key part for our listeners. Alex, why does this matter for business? What are the practical takeaways for a manager or executive?
Expert: This is incredibly practical. First, WAMN gives you a structured way to stop guessing. You can move from anecdotes about workarounds to a data-driven conversation about their true costs and benefits.
Host: So it helps you make better decisions.
Expert: Yes, and it helps you turn employee creativity into a competitive advantage. That clever shortcut an employee designed might be a brilliant process improvement waiting to be standardized across the company. WAMN provides a path to identify and scale those bottom-up innovations safely.
Host: So it’s a tool for both risk management and innovation.
Expert: Exactly. It helps you decide whether to adopt, adapt, or prevent a workaround. The study mentions creating a "workaround board"—a dedicated group that uses these visual maps to make informed decisions. It creates a common language for operations, IT, and management to collaborate on improving how work actually gets done.
Host: Fantastic. So, to summarize for our audience: companies are filled with employee workarounds that are often invisible and poorly understood.
Host: This study created a visual language called WAMN that allows businesses to map these workarounds, clearly see their positive and negative effects, and treat them as a source of potential innovation.
Host: Ultimately, it’s about making smarter, more consistent decisions to improve processes from the ground up. Alex, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thanks to our audience for tuning into A.I.S. Insights, powered by Living Knowledge. Join us next time as we decode another key piece of research for your business.
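The kind of record a WAMN model captures (cause, action, and colour-coded effects) can be sketched as a small data structure. Only the green/red convention and the manufacturing example come from the study; the classes, field names, and summary tally below are illustrative assumptions:

```python
# Sketch of a workaround record in the spirit of WAMN: a cause, an
# action, and effects colour-coded as positive (green) or negative
# (red). The data classes and the tally are illustrative, not the
# notation's actual metamodel.

from dataclasses import dataclass, field

@dataclass
class Effect:
    description: str
    positive: bool  # rendered green when True, red when False

@dataclass
class Workaround:
    cause: str
    action: str
    effects: list = field(default_factory=list)

    def summary(self):
        """Tally green vs. red effects for a quick trade-off view."""
        pos = sum(1 for e in self.effects if e.positive)
        return {"green": pos, "red": len(self.effects) - pos}

wa = Workaround(
    cause="standard procedure seen as too slow",
    action="informal shortcut on the assembly line",
    effects=[
        Effect("time saved on the assembly line", positive=True),
        Effect("inaccurate inventory records", positive=False),
    ],
)
print(wa.summary())  # {'green': 1, 'red': 1}
```

A "workaround board" could maintain a list of such records and sort them by the balance of green to red effects when deciding which workarounds to adopt, adapt, or prevent.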
Workaround, Business Process Management, Domain-Specific Modeling Language, Design Science Research, Process Innovation, Organizational Decision-Making