Proceedings of the 59th Hawaii International Conference on System Sciences (2026)
Discovering the Impact of Regulation Changes on Processes: Findings from a Process Science Study in Finance
Antonia Wurzer, Sophie Hartl, Sandro Franzoi, Jan vom Brocke
This study investigates how regulatory changes, once embedded in a company's information systems, affect the dynamics of business processes. Using digital trace data from a European financial institution's trade order process combined with qualitative interviews, the researchers identified patterns linking the implementation of new regulations to changes in process performance indicators.
Problem
In highly regulated industries like finance, organizations must constantly adapt their operations to evolving external regulations. However, there is little understanding of the dynamic, real-world effects that implementing these regulatory changes within IT systems has on the execution and performance of business processes over time.
Outcome
- Implementing regulatory changes in IT systems dynamically affects business processes, causing performance indicators to shift immediately or with a time delay.
- Contextual factors, such as employee experience and the quality of training, significantly shape how processes adapt; insufficient training after a change can lead to more errors, process loops, and violations.
- Different types of regulations (e.g., content-based vs. function-based) produce distinct impacts, with some streamlining processes and others increasing rework and complexity for employees.
- The study highlights the need for businesses to move beyond a static view of compliance and proactively manage the dynamic interplay between regulation, system design, and user behavior.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a fascinating study titled "Discovering the Impact of Regulation Changes on Processes: Findings from a Process Science Study in Finance."
Host: In short, it explores what really happens to a company's day-to-day operations after a new regulation is coded into its IT systems. With me to break it down is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, let's start with the big picture. Businesses in fields like finance are constantly dealing with new rules. What's the specific problem this study decided to tackle?
Expert: The problem is that most companies treat compliance as a finish line. A new regulation comes out, they update their software, and they consider the job done. But they have very little visibility into what happens next. How does that change *actually* affect employees? Does it make their work smoother or more complicated? Does it create hidden risks or inefficiencies?
Expert: This study addresses that gap. It looks at the dynamic, real-world ripple effects that these system changes have on business processes over time, which is something organizations have struggled to understand.
Host: So it’s about the unintended consequences. How did the researchers go about measuring these ripples?
Expert: They used a really clever dual approach. First, they analyzed what's called digital trace data. Think of it as the digital footprint employees leave behind when doing their jobs. They analyzed nearly 17,000 trade order processes from a European financial institution over six months.
Expert: But data alone doesn't tell the whole story. So, they combined that quantitative data with qualitative insights—talking to the actual employees, the process owners and business analysts, to understand the context behind the numbers. This let them see not just *what* was happening, but *why*.
Host: That combination of data and human insight sounds powerful. What were some of the key findings?
Expert: There were three big ones. First, the impact of a change isn't always immediate. Sometimes a system update causes a sudden spike in problems, but other times the negative effects are delayed and pop up weeks later. It's not a simple cause-and-effect.
Host: And the second finding?
Expert: This one is crucial: the human factor matters immensely. The study found that things like employee experience and, most importantly, the quality of training had a huge impact on how processes adapted.
Host: Can you give us an example?
Expert: Absolutely. After one regulatory change related to ESG reporting was implemented, the data showed a sharp increase in the number of steps employees took to complete a task, and more process violations. The interviews revealed why: there was no structured training for the change. Employees were confused by a subtly altered interface, which led them to make more errors, repeat steps, and get frustrated.
Host: So a small system update, without proper support, can actually hurt productivity. What was the final key finding?
Expert: That not all regulatory changes are created equal. The study found that different types of regulations create very different outcomes. A change that automated the generation of a required document actually streamlined the process, making it leaner with less rework.
Expert: But in contrast, a change that added new manual tick-boxes for users to fill out increased complexity and rework, because employees found themselves having to go back and complete the new fields repeatedly.
Host: This is incredibly practical. Let's move to the most important question for our listeners: why does this matter for their business? What are the key takeaways?
Expert: The number one takeaway is to move beyond a static view of compliance. Implementing a change in your IT system isn't the end of the process; it's the beginning. Leaders need to proactively monitor how these changes are affecting workflows on the ground, and this study shows they can use their own system data to do it.
Host: So, use your data to see the real impact. What's the next takeaway?
Expert: Invest in change management, especially training. You can spend millions on a compliant system, but if you don't prepare your people, you could actually lower efficiency and increase errors. The study provides clear evidence that a lack of training directly leads to process loops and mistakes. A simple, proactive training plan is not a cost—it's an investment against future risk and inefficiency.
Host: That’s a powerful point. And the final piece of advice?
Expert: Understand the nature of the change before you implement it. Ask your teams: is this update automating a task for our employees, or is it adding a new manual burden? Answering that simple question can help you predict whether the change will be a helpful streamline or a frustrating new bottleneck, and you can plan your support and training accordingly.
Host: Fantastic insights. So, to summarize for our listeners: compliance is a dynamic, ongoing process, not a one-time fix. The human factor, especially training, is absolutely critical to success. And finally, understanding the type of regulatory change can help you predict its true impact on your business.
Host: Alex Ian Sutherland, thank you for making this complex study so clear and actionable for us.
Expert: My pleasure, Anna.
Host: And thank you for listening to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more valuable research for your business.
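The monitoring idea the episode describes—comparing indicators such as case length, rework loops, and throughput time before and after a regulation's go-live—can be sketched from an event log. The sketch below is a minimal illustration, not the study's actual pipeline: the column names, activities, and change date are invented, and the loop indicator (repeated activities within a case) is deliberately crude.

```python
import pandas as pd

# Hypothetical event log: one row per executed activity in a trade order case.
# Schema (case_id, activity, timestamp) is an illustrative assumption.
events = pd.DataFrame({
    "case_id":  [1, 1, 1, 2, 2, 2, 2],
    "activity": ["create order", "check", "approve",
                 "create order", "check", "check", "approve"],
    "timestamp": pd.to_datetime([
        "2024-01-02 09:00", "2024-01-02 09:05", "2024-01-02 09:20",
        "2024-03-10 10:00", "2024-03-10 10:04", "2024-03-10 10:30",
        "2024-03-10 11:00"]),
})
CHANGE_DATE = pd.Timestamp("2024-02-01")  # assumed go-live of the regulation

# Per-case indicators: step count, repeated activities (crude loop proxy),
# and throughput time in minutes.
metrics = events.groupby("case_id").agg(
    steps=("activity", "size"),
    repeats=("activity", lambda s: int(s.duplicated().sum())),
    first=("timestamp", "min"),
    last=("timestamp", "max"),
)
metrics["minutes"] = (metrics["last"] - metrics["first"]).dt.total_seconds() / 60
metrics["period"] = (metrics["first"] >= CHANGE_DATE).map(
    {True: "after", False: "before"})

# Compare average indicators before vs. after the change.
print(metrics.groupby("period")[["steps", "repeats", "minutes"]].mean())
```

On a real log, a shift in these averages around the go-live date is exactly the kind of signal the interviews would then be used to explain.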
Process Science, Regulation, Change, Business Processes, Digital Trace Data, Dynamics
International Conference on Wirtschaftsinformatik (2025)
Education and Migration of Entrepreneurial and Technical Skill Profiles of German University Graduates
David Blomeyer and Sebastian Köffer
This study examines the supply of entrepreneurial and technical talent from German universities and analyzes their migration patterns after graduation. Using LinkedIn alumni data for 43 universities, the research identifies key locations for talent production and evaluates how effectively different cities and federal states retain or attract these skilled workers.
Problem
Amidst a growing demand for skilled workers, particularly for startups, companies and policymakers lack clear data on talent distribution and mobility in Germany. This information gap makes it difficult to devise effective recruitment strategies, choose business locations, and create policies that foster regional talent retention and economic growth.
Outcome
- Universities in major cities, especially TU München and LMU München, produce the highest number of graduates with entrepreneurial and technical skills.
- Talent retention varies significantly by location; universities in major metropolitan areas like Berlin, Munich, and Hamburg are most successful at keeping their graduates locally, with FU Berlin retaining 68.8% of its entrepreneurial alumni.
- The tech hotspots of North Rhine-Westphalia (NRW), Bavaria, and Berlin retain an above-average number of their own graduates while also attracting a large share of talent from other regions.
- Bavaria is strong in both educating and attracting talent, whereas NRW, the largest producer of talent, also loses a significant number of graduates to other hotspots.
- The analysis reveals that hotspot regions are generally better at retaining entrepreneurial profiles than technical profiles, highlighting the influence of local startup ecosystems on talent mobility.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. In today's competitive landscape, finding the right talent can make or break a business. But where do you find them? Today, we're diving into a fascinating study titled "Education and Migration of Entrepreneurial and Technical Skill Profiles of German University Graduates."
Host: In short, it examines where Germany's top entrepreneurial and tech talent comes from, and more importantly, where it goes after graduation. With me to break it all down is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: So, Alex, let's start with the big picture. What's the real-world problem this study is trying to solve?
Expert: The problem is a significant information gap. Germany has a huge demand for skilled workers, especially in STEM fields—we're talking a gap of over 300,000 specialists. Startups, in particular, need this talent to scale. But companies and even regional governments don't have clear data on where these graduates are concentrated and how they move around the country.
Host: So they’re flying blind when it comes to recruitment or deciding where to set up a new office?
Expert: Exactly. Without this data, it's hard to build effective recruitment strategies or create policies that help a region hold on to the talent it educates. This study gives us a map of Germany's brain circulation for the first time.
Host: How did the researchers create this map? What was their approach?
Expert: It was quite innovative. They used a massive and publicly available dataset: LinkedIn alumni pages. They analyzed over 2.4 million alumni profiles from 43 major German universities.
Host: And how did they identify the specific talent they were looking for?
Expert: They created two key profiles. First, the 'Entrepreneurial Profile,' using keywords like Founder, Startup, or Business Development. Second, the 'Technical Profile,' with keywords like IT, Engineering, or Digital. Then, they tracked the current location of these graduates to see who stays, who leaves, and where they go.
Host: A digital breadcrumb trail for talent. So, what were the key findings? Where is the talent coming from?
Expert: Unsurprisingly, universities in major cities are the biggest producers. The undisputed leader is Munich. The Technical University of Munich, TU München, produces the highest number of both entrepreneurial and technical graduates in the entire country.
Host: So Munich is the top talent factory. But the crucial question is, does the talent stay there?
Expert: That's where it gets interesting. The study found that talent retention varies massively. Again, the big metropolitan areas—Berlin, Munich, and Hamburg—are the most successful at keeping their graduates. Freie Universität Berlin, for example, retains nearly 69% of its entrepreneurial alumni right there in the city. That's an incredibly high rate.
Host: That is high. And what about the bigger picture, at the state level? Are there specific regions that are winning the war for talent?
Expert: Yes, the study identifies three clear hotspots: Bavaria, Berlin, and North Rhine-Westphalia, or NRW. They not only retain a high number of their own graduates, but they also act as magnets, pulling in talent from all over Germany.
Host: And are these hotspots all the same?
Expert: Not at all. Bavaria is a true powerhouse—it's strong in both educating and attracting talent. NRW is the largest producer of skilled graduates, but it also has a "brain drain" problem, losing a lot of its talent to the other two hotspots. And Berlin is a massive talent magnet, with almost half of its entrepreneurial workforce having migrated there from other states.
Host: This is all fascinating, Alex, but let's get to the bottom line. Why does this matter for the business professionals listening to our show?
Expert: This is a strategic roadmap for businesses. For recruitment, it means you can move beyond simple university rankings. This data tells you where specific talent pools are geographically concentrated. Need experienced engineers? The data points squarely to Munich. Looking for entrepreneurial thinkers? Berlin is a giant hub of attracted, not just homegrown, talent.
Host: So it helps companies focus their hiring efforts. What about for bigger decisions, like choosing a business location?
Expert: Absolutely. This study helps you understand the dynamics of a regional talent market. Bavaria offers a stable, locally grown talent pool. Berlin is incredibly dynamic but relies on its power to attract people, which could be vulnerable to competition. A company in NRW needs to know it’s competing directly with Berlin and Munich for its best people.
Host: So it's about understanding the long-term sustainability of the local talent pipeline.
Expert: Precisely. It also has huge implications for investors and policymakers. It reveals which regions are getting the best return on their educational investments. It shows where to invest to build up a local startup ecosystem that can actually hold on to the bright minds it helps create.
Host: So, to sum it up: we now have a much clearer picture of Germany's talent landscape. Universities in big cities are the incubators, but major hotspots like Berlin and Bavaria are the magnets that ultimately attract and retain that talent.
Expert: That's right. It's not just about who has the best universities, but who has the best ecosystem to keep the graduates those universities produce.
Host: A crucial insight for any business looking to grow. Alex, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in. Join us next time for more on A.I.S. Insights — powered by Living Knowledge.
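The retention figures the episode cites (e.g., FU Berlin keeping roughly 69% of its entrepreneurial alumni) boil down to a simple share: of a university's graduates, how many still report the university's city as their current location. The toy table below is a minimal sketch of that calculation; the rows and column names are invented stand-ins for the study's LinkedIn-derived fields, and the numbers are not the study's results.

```python
import pandas as pd

# Invented alumni records: university, its city, and the graduate's
# current city (illustrative only, not the study's data).
alumni = pd.DataFrame({
    "university":   ["FU Berlin"] * 4 + ["TU München"] * 3,
    "uni_city":     ["Berlin"] * 4 + ["München"] * 3,
    "current_city": ["Berlin", "Berlin", "Berlin", "Hamburg",
                     "München", "Berlin", "München"],
})

# A graduate "stayed" if their current city matches the university's city.
alumni["stayed"] = alumni["current_city"] == alumni["uni_city"]

# Retention rate per university, as a percentage.
retention = alumni.groupby("university")["stayed"].mean().mul(100).round(1)
print(retention)
```

The same grouping extended over origin and destination states yields the hotspot-level in- and out-migration shares discussed above.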
International Conference on Wirtschaftsinformatik (2025)
Towards the Acceptance of Virtual Reality Technology for Cyclists
Sophia Elsholz, Paul Neumeyer, and Rüdiger Zarnekow
This study investigates the factors that influence cyclists' willingness to adopt virtual reality (VR) for indoor training. Using a survey of 314 recreational and competitive cyclists, the research applies an extended Technology Acceptance Model (TAM) to determine what makes VR appealing for platforms like Zwift.
Problem
While digital indoor cycling platforms exist, they lack the full immersion that VR can offer. However, it is unclear whether cyclists would actually accept and use VR technology, as its potential in sports remains largely theoretical and the specific factors driving adoption in cycling are unknown.
Outcome
- Perceived enjoyment is the single most important factor determining if a cyclist will adopt VR for training.
- Perceived usefulness, or the belief that VR will improve training performance, is also a strong predictor of acceptance.
- Surprisingly, the perceived ease of use of the VR technology did not significantly influence a cyclist's intention to use it.
- Social factors, such as the opinions of other athletes and trainers, along with a cyclist's general openness to new technology, positively contribute to their acceptance of VR.
- Both recreational and competitive cyclists showed similar levels of acceptance, indicating a broad potential market, but both groups are currently skeptical about VR's ability to improve performance.
Host: Welcome to A.I.S. Insights, the podcast where we connect Living Knowledge with real-world business strategy. I'm your host, Anna Ivy Summers.
Host: Today, we're gearing up to talk about the intersection of fitness and immersive technology. We're diving into a fascinating study called "Towards the Acceptance of Virtual Reality Technology for Cyclists."
Host: It explores what makes cyclists, both amateur and pro, willing to adopt VR for their indoor training routines. Here to break it all down for us is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: So, Alex, let's start with the big picture. People are already using platforms like Zwift for indoor cycling. What's the problem this study is trying to solve?
Expert: That's the perfect place to start. Those platforms are popular, but they're still fundamentally a 2D screen experience. The big problem is that while VR promises a much more immersive, realistic training session, its potential in sports is still largely theoretical.
Expert: Companies are hesitant to invest millions in developing VR cycling apps because they simply don't know if cyclists will actually use them. We need to understand the 'why' behind adoption before the 'what' gets built.
Host: So it’s about closing that gap between a cool idea and a viable product. How did the researchers go about figuring out what cyclists want?
Expert: They took a very methodical approach. They conducted a detailed survey with 314 cyclists, ranging from recreational riders to competitive athletes.
Expert: They used a framework called the Technology Acceptance Model, or TAM, which they extended for this specific purpose. Essentially, it's a way to measure the key psychological factors that make someone decide to use a new piece of tech.
Expert: They didn't just look at whether it's useful or easy to use. They also measured the impact of perceived enjoyment, a cyclist's general openness to new tech, and even social pressure from trainers and other athletes.
Host: And after surveying all those cyclists, what were the most surprising findings?
Expert: There were a few real eye-openers. First and foremost, the single most important factor for adoption wasn't performance gains—it was perceived enjoyment.
Host: You mean, it has to be fun? More so than effective?
Expert: Exactly. The data shows that if the experience isn't fun, cyclists won't be interested. This suggests they see VR cycling as a 'hedonic' system—one used for enjoyment—rather than a purely utilitarian training tool. Usefulness was the second biggest factor, but fun came first.
Host: That is interesting. What else stood out?
Expert: The biggest surprise was what *didn't* matter. The perceived ease of use of the VR technology had no significant direct impact on a cyclist's intention to adopt it.
Host: So, they don't mind if it's a bit complicated to set up, as long as the experience is worth it?
Expert: Precisely. They're willing to overcome a technical hurdle if the payoff in enjoyment and usefulness is there. The study also confirmed that social factors are key—what your teammates and coach think about the tech really does influence your willingness to try it.
Host: This is where it gets critical for our listeners. Alex, what does this all mean for business? What are the key takeaways for a company in the fitness tech space?
Expert: This study provides a clear roadmap. The first takeaway is: lead with fun. Your marketing, your design, your user experience—it all has to be built around creating an engaging and enjoyable world. Forget sterile lab simulations; think gamified adventures.
Host: So sell the experience, not just the specs.
Expert: Exactly. The second takeaway addresses the usefulness problem. The study found that cyclists are currently skeptical that VR can actually improve their performance. So, a business needs to explicitly educate the market.
Expert: This means developing and promoting features that offer clear performance benefits you can't get elsewhere—like real-time feedback on your pedaling technique or the ability to practice a specific, difficult segment of a real-world race course in VR.
Host: That sounds like a powerful marketing angle. You're not just riding; you're gaining a competitive edge.
Expert: It is. And the final key takeaway is to leverage the community. Since social norms are so influential, businesses should target teams, clubs, and coaches. A positive review from a respected trainer could be more valuable than a massive ad campaign. Build community features that encourage social interaction and friendly competition.
Host: Fantastic insights, Alex. So, to summarize for our business leaders: to succeed in the VR cycling market, the winning formula is to first make it fun, then prove it makes you faster, and finally, empower the community to spread the word.
Expert: You've got it. It's about balancing the enjoyment with tangible, marketable benefits.
Host: Thank you so much for breaking that down for us, Alex. It's clear that understanding the user is the first and most important lap in this race.
Host: And thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we uncover more actionable insights from the world of research.
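The core of a TAM-style analysis is estimating how strongly each factor predicts usage intention. TAM studies typically fit structural equation models on real survey data; the sketch below substitutes a plain least-squares regression on simulated Likert-style scores purely to illustrate the logic. The sample size matches the survey (314), but the data and the effect sizes (0.6 for enjoyment, 0.4 for usefulness, 0 for ease of use, loosely echoing the ordering of the findings) are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 314  # same sample size as the survey, but the data here is simulated

# Simulated 1-7 Likert-style factor scores.
enjoyment  = rng.uniform(1, 7, n)
usefulness = rng.uniform(1, 7, n)
ease       = rng.uniform(1, 7, n)

# Invented "true" effects: enjoyment strongest, ease of use nil.
intention = (0.6 * enjoyment + 0.4 * usefulness + 0.0 * ease
             + rng.normal(0, 0.5, n))

# Ordinary least squares: intention ~ intercept + factors.
X = np.column_stack([np.ones(n), enjoyment, usefulness, ease])
coef, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(dict(zip(["intercept", "enjoyment", "usefulness", "ease"],
               coef.round(2))))
```

The recovered coefficients mirror the simulated effects: a large weight on enjoyment, a moderate one on usefulness, and one near zero for ease of use, which is the shape of result the study reports.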
Technology Acceptance, TAM, Cycling, Extended Reality, XR
International Conference on Wirtschaftsinformatik (2025)
Designing Change Project Monitoring Systems: Insights from the German Manufacturing Industry
Bastian Brechtelsbauer
This study details the design of a system to monitor organizational change projects, using insights from an action design research project with two large German manufacturing companies. The methodology involved developing and evaluating a prototype system, which includes a questionnaire-based survey and an interactive dashboard for data visualization and analysis.
Problem
Effectively managing organizational change is crucial for company survival, yet it is notoriously difficult to track and oversee. There is a significant research gap and lack of practical guidance on how to design information technology systems that can successfully monitor change projects to improve transparency and support decision-making for managers.
Outcome
- Developed a prototype change project monitoring system consisting of surveys and an interactive dashboard to track key indicators like change readiness, acceptance, and implementation.
- Identified four key design challenges: balancing user effort vs. insight depth, managing standardization vs. adaptability, creating a realistic understanding of data quantification, and establishing a shared vision for the tool.
- Proposed three generalized requirements for change monitoring systems: they must provide information tailored to different user groups, be usable for various types of change projects, and conserve scarce resources during organizational change.
- Outlined eight design principles to guide development, focusing on both the system's features (e.g., modularity, intuitive visualizations) and the design process (e.g., involving stakeholders, communicating a clear vision).
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating new study titled "Designing Change Project Monitoring Systems: Insights from the German Manufacturing Industry". It explores how to build better tools to keep track of major organizational change. With me today is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: So, Alex, let’s start with the big picture. We all know companies are constantly changing, but why is monitoring that change such a critical problem to solve right now?
Expert: It's a huge issue. Think about the pressures on a major industry like German manufacturing, which this study focuses on. They're dealing with digital transformation, new sustainability goals, and intense global competition. Thriving, or even just surviving, means constant adaptation.
Host: And that adaptation is managed through change projects.
Expert: Exactly. Projects like restructuring departments, adopting new technologies, or shifting the entire company culture. The problem is, these are incredibly complex and expensive, yet managers often lack a clear, real-time view of what’s actually happening on the ground. They’re trying to navigate a storm without a compass.
Host: So they’re relying on gut feeling rather than data.
Expert: For the most part, yes. There's been a real lack of practical guidance on how to design an IT system that can properly monitor these projects, track employee sentiment, and give leaders the data they need to make better decisions. This study aimed to fill that gap.
Host: How did the researchers approach such a complex problem? What was their method?
Expert: Well, this wasn't a purely theoretical exercise. The researchers took a hands-on approach. They partnered directly with two large German manufacturing companies to co-develop a prototype system from the ground up.
Host: So they built something real and tested it?
Expert: Precisely. They created a system that has two main parts. First, a series of questionnaires to regularly survey employees about the change project—things like their readiness for the change, how well they feel supported, and their overall acceptance. Second, they built an interactive dashboard that visualizes all that survey data, so managers can see trends and drill down into specific areas or departments.
Host: That sounds incredibly useful. What were the key findings after they developed this prototype?
Expert: The first finding is that this type of system can work and provide immense value. But the second, and perhaps more interesting finding, was about the challenges they faced in designing it. It's not as simple as just building a dashboard.
Host: What kind of challenges?
Expert: They identified four main ones. First was balancing user effort against the depth of insight. You want detailed data, but you can’t overwhelm employees with constant, lengthy surveys.
Host: That makes sense. What else?
Expert: Second, managing standardization versus adaptability. For the data to be comparable across the company, you need a standard tool. But every change project is unique and needs some flexibility. Finding that balance is tricky.
Host: So it's a constant trade-off.
Expert: It is. The other two challenges were more human-centric. They had to create a realistic understanding of what the data could actually represent—quantification isn’t a magic wand for complex social processes. And finally, they had to establish a shared vision for what the tool was for, to avoid confusion or resistance from users.
Host: Which brings us to the most important question, Alex. Why does this matter for business leaders listening today? What are the practical takeaways?
Expert: The biggest takeaway is that you can and should move from guesswork to data-informed decision-making in change management. This study provides a practical blueprint for how to do that. You can get a real pulse on your organization during its most critical moments.
Host: And it seems the lesson is that the tool itself is only half the battle.
Expert: Absolutely. The second key takeaway is that the design *process* is crucial. You have to treat the implementation of a monitoring system as a change project in its own right. That means involving stakeholders from all levels, communicating a clear vision for the tool, and being upfront about its limitations.
Host: You mentioned the importance of balance and trade-offs. How should a leader think about that?
Expert: That’s the third takeaway. Leaders must be willing to make conscious trade-offs. There is no perfect, one-size-fits-all solution. You have to decide what matters most for your organization: Is it ease of use, or is it granular data? Is company-wide standardization more important than project-specific flexibility? This study shows that acknowledging and navigating these trade-offs is central to success.
Host: So, Alex, to sum up, it sounds like while change is difficult, we now have a much clearer path to actually measuring and managing it effectively.
Expert: That's right. These new monitoring systems, combining simple surveys with powerful dashboards, can offer the transparency that leaders have been missing. But success hinges on a thoughtful design process that balances technology with the very human elements of change.
Host: A fantastic insight. Thank you so much for breaking that down for us, Alex.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning in. For A.I.S. Insights — powered by Living Knowledge, I’m Anna Ivy Summers.
Change Management, Monitoring, Action Design Research, Design Science, Industry
International Conference on Wirtschaftsinformatik (2025)
Bias Measurement in Chat-optimized LLM Models for Spanish and English
Ligia Amparo Vergara Brunal, Diana Hristova, and Markus Schaal
This study develops and applies a method to evaluate social biases in chat-optimized large language models (LLMs) for both English and Spanish. Researchers tested three state-of-the-art models on two datasets designed to expose stereotypical thinking, comparing performance across languages and contexts.
Problem
As AI language models are increasingly used for critical decisions in areas like healthcare and human resources, there's a risk they could spread harmful social biases. While bias in English AI has been extensively studied, there is a significant lack of research on how these biases manifest in other widely spoken languages, such as Spanish.
Outcome
- Models were generally worse at identifying and refusing to answer biased questions in Spanish compared to English.
- However, when the models did provide an answer to a biased prompt, their responses were often fairer (less stereotypical) in Spanish.
- Models provided fairer answers when the questions were direct and unambiguous, as opposed to indirect or vague.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge, the podcast where we break down complex research into actionable business intelligence. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating study called "Bias Measurement in Chat-optimized LLM Models for Spanish and English."
Host: It explores how social biases show up in advanced AI, not just in English, but also in Spanish, and the results are quite surprising. Here to walk us through it is our expert analyst, Alex Ian Sutherland. Alex, welcome back.
Expert: Thanks for having me, Anna. It's a really important topic.
Host: Absolutely. So, let’s start with the big picture. We hear a lot about AI bias, but why does this study, with its focus on Spanish, really matter for businesses today?
Expert: It matters because businesses are going global with AI. These models are being used in incredibly sensitive areas—like screening résumés in HR, supporting doctors in healthcare, or powering customer service bots.
Expert: The problem is, most of the safety research and bias testing has been focused on English. This study addresses a huge blind spot: how do these models behave in other major world languages, like Spanish? If the AI is biased, it could lead to discriminatory hiring, unequal service, and significant legal risk for a global company.
Host: That makes perfect sense. You can’t just assume the safety features work the same everywhere. So how did the researchers actually measure this bias?
Expert: They took a very systematic approach. They used datasets filled with questions designed to trigger stereotypes. These questions were presented in two ways: some were ambiguous, where there wasn't enough information for a clear answer, and others were direct and unambiguous.
Expert: Then, they fed these prompts to three leading AI models in both English and Spanish. They analyzed every response to see if the model would give a biased answer, a fair one, or if it would identify the tricky nature of the question and refuse to answer at all.
Host: A kind of stress test for AI fairness. I'm curious, what were the key findings from this test?
Expert: There were a few real surprises. First, the models were generally worse at identifying and refusing to answer biased questions in Spanish. In English, they were more cautious, but in Spanish, they were more likely to just give an answer, even to a problematic prompt.
Host: So they have fewer guardrails in Spanish?
Expert: Exactly. But here’s the paradox, and this was the second key finding. When the models *did* provide an answer to a biased prompt, their responses were often fairer and less stereotypical in Spanish than they were in English.
Host: Wait, that’s completely counterintuitive. Less cautious, but more fair? How can that be?
Expert: It's a fascinating trade-off. The study suggests that the intense safety tuning for English makes the models very cautious, but when they do slip up, the bias can be strong. In Spanish, the same models, while less guarded, seemed to fall back on less stereotypical patterns when forced to answer.
Host: And was there a third major finding?
Expert: Yes, and it’s a very practical one. The models provided much fairer answers across both languages when the questions were direct and unambiguous. When prompts were vague or indirect, that's where the stereotypes and biases were most likely to creep in.
Host: This is where it gets critical for our audience. Alex, what are the actionable takeaways for business leaders using AI in a global market?
Expert: This is the most important part. First, you cannot assume your AI’s English safety protocols will work in other languages. If you're deploying a chatbot for global customer service or an HR tool in different countries, you must test and validate its performance and fairness in every single language.
Host: So, no cutting corners on multilingual testing. What’s the second takeaway?
Expert: It’s all about how you talk to the AI. That finding about direct questions is a lesson in prompt engineering. Businesses need to train their teams to be specific and unambiguous when using these tools. A clear, direct instruction is your best defense against getting a biased or nonsensical output. Vagueness is the enemy.
Host: That's a great point. Clarity is a risk mitigation tool. Any final thoughts for companies looking to procure AI technology?
Expert: Yes. This study highlights a clear market gap. As a business, you should be asking your AI vendors hard questions. What are you doing to measure and mitigate bias in Spanish, French, or Mandarin? Don't just settle for English-centric safety claims. Demand models that are proven to be fair and reliable for your global customer base.
Host: Powerful advice. So, to summarize: AI bias is not a monolith; it behaves differently across languages, with strange trade-offs between caution and fairness.
Host: For businesses, the message is clear: test your AI tools in every market, train your people to write clear and direct prompts, and hold your technology partners accountable for true global performance.
Host: Alex, thank you for breaking this down for us with such clarity.
Expert: My pleasure, Anna.
Host: And a big thank you to our listeners for tuning in to A.I.S. Insights — powered by Living Knowledge. We’ll see you next time.
LLM, bias, multilingual, Spanish, AI ethics, fairness
International Conference on Wirtschaftsinformatik (2025)
Algorithmic Management: An MCDA-Based Comparison of Key Approaches
Arne Jeppe, Tim Brée, and Erik Karger
This study employs Multi-Criteria Decision Analysis (MCDA) to evaluate and compare four distinct approaches for governing algorithmic management systems: principle-based, rule-based, risk-based, and auditing-based. The research gathered preferences from 27 experts regarding each approach's effectiveness, feasibility, adaptability, and stakeholder acceptability to determine the most preferred strategy.
Problem
As organizations increasingly use algorithms to manage workers, they face the challenge of governing these systems to ensure fairness, transparency, and accountability. While several governance models have been proposed conceptually, there is a significant research gap regarding which approach is empirically preferred by experts and most practical for balancing innovation with responsible implementation.
Outcome
- Experts consistently and strongly preferred a hybrid, risk-based approach for governing algorithmic management systems.
- This approach was perceived as the most effective in mitigating risks (like bias and privacy violations) while also demonstrating good adaptability to new technologies and high stakeholder acceptability.
- The findings suggest that a 'one-size-fits-all' strategy is ineffective; instead, a pragmatic approach that tailors the intensity of governance to the level of potential harm is most suitable.
- Purely rule-based approaches were seen as too rigid and slow to adapt, while purely principle-based approaches were considered difficult to enforce.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge.
Host: Today we're diving into a fascinating study called "Algorithmic Management: An MCDA-Based Comparison of Key Approaches".
Host: It’s all about figuring out the best way for companies to govern the AI systems they use to manage their employees.
Host: The researchers evaluated four different strategies to see which one experts prefer for managing these complex systems. I'm joined by our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: Alex, let's start with the big picture. More and more, algorithms are making decisions that used to be made by human managers—assigning tasks, monitoring performance, even hiring. What’s the core problem businesses are facing with this shift?
Expert: The core problem is governance. As companies rely more on these powerful tools, they're struggling to ensure the systems are fair, transparent, and accountable.
Expert: As the study points out, while algorithms can boost efficiency, they also raise serious concerns about worker autonomy, fairness, and the "black box" problem, where no one understands why an algorithm made a certain decision.
Host: So it's a balancing act? Companies want the benefits of AI without the ethical and legal risks?
Expert: Exactly. The study highlights that while many conceptual models for governance exist, there's been a real gap in understanding which approach is actually the most practical and effective. That’s what this research set out to discover.
Host: How did the researchers tackle this? How do you test which governance model is "best"?
Expert: They used a method called Multi-Criteria Decision Analysis, or MCDA. In simple terms, they identified four distinct models: a high-level Principle-Based approach, a strict Rule-Based approach, an industry-led Auditing-Based approach, and finally, a hybrid Risk-Based approach.
Expert: They then gathered a panel of 27 experts from academia, industry, and government. These experts scored each approach against key criteria: its effectiveness, its feasibility to implement, its adaptability to new technology, and its acceptability to stakeholders.
Host: So they're essentially using the collective wisdom of experts to find the most balanced solution.
Expert: Precisely. It moves the conversation from a purely theoretical debate to one based on structured, evidence-based preferences from people in the field.
Host: And what did this expert panel conclude? Was there a clear winner?
Expert: There was, and it was quite decisive. The experts consistently and strongly preferred the hybrid, risk-based approach. The data shows it was ranked first by 21 of the 27 experts.
Host: Why was that approach so popular?
Expert: It was seen as the pragmatic sweet spot. The study shows it was rated highest for effectiveness in mitigating risks like bias or privacy violations, but it also scored very well on adaptability and stakeholder acceptability. It’s a practical middle ground.
Host: What about the other approaches? What were their weaknesses?
Expert: The study revealed clear trade-offs. The purely rule-based approach, with its strict regulations, was seen as too rigid and slow. It scored lowest on adaptability.
Expert: On the other hand, the principle-based approach was rated as highly adaptable, but experts worried it was too abstract and difficult to actually enforce. In fact, it scored lowest on feasibility.
Host: So the big message is that a one-size-fits-all strategy doesn't work.
Expert: That's the crucial point. The findings strongly suggest that the best strategy is one that tailors the intensity of governance to the level of potential harm.
Host: Alex, this is the key question for our listeners. What does a "risk-based approach" actually look like in practice for a business leader?
Expert: It means you don't treat all your algorithms the same. The study gives a great example from a logistics company. An algorithm that simply optimizes delivery routes is low-risk. For that, your governance can be lighter, focusing on efficiency principles and basic monitoring.
Expert: But an algorithm that has the autonomy to deactivate a driver's account based on performance metrics? That's extremely high-risk.
Host: So what kind of extra controls would be needed for that high-risk system?
Expert: The risk-based approach would demand much stricter controls. Things like mandatory human oversight for the final decision, regular audits for bias, full transparency for the driver on how the system works, and a clear, accessible process for them to appeal the decision.
Host: So it's about being strategic. It allows companies to innovate with low-risk AI without getting bogged down, while putting strong guardrails around the most impactful decisions.
Expert: Exactly. It's a practical roadmap for responsible innovation. It helps businesses avoid the trap of being too rigid, which stifles progress, or too vague, which invites ethical and legal trouble.
Host: So, to sum up: as businesses use AI to manage people, the challenge is how to govern it responsibly.
Host: This study shows that experts don't want rigid rules or vague principles. They strongly prefer a hybrid, risk-based approach.
Host: This means classifying algorithmic systems by their potential for harm and tailoring governance accordingly—lighter for low-risk, and much stricter for high-risk applications.
Host: It’s a pragmatic path forward for balancing innovation with accountability. Alex, thank you so much for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights. Join us next time as we translate living knowledge into business impact.
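The MCDA aggregation the episode describes can be sketched as a simple weighted-scoring exercise: each expert scores every governance approach on the four criteria, and a weighted sum ranks the approaches. A minimal sketch follows; all weights and scores are invented for illustration and are not the study's data.

```python
# Hypothetical MCDA-style aggregation: per-criterion scores (0-10) are
# combined into one weighted value per governance approach, then ranked.
# Equal criterion weights are an assumption made for this sketch.

CRITERIA_WEIGHTS = {
    "effectiveness": 0.25,
    "feasibility": 0.25,
    "adaptability": 0.25,
    "acceptability": 0.25,
}

def weighted_score(scores: dict) -> float:
    """Aggregate one approach's per-criterion scores into a single value."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

def rank(panel: dict) -> list:
    """Order approaches from most to least preferred by weighted score."""
    return sorted(panel, key=lambda a: weighted_score(panel[a]), reverse=True)

# Invented example scores for a single expert (not the study's data):
example = {
    "principle-based": {"effectiveness": 5, "feasibility": 3, "adaptability": 9, "acceptability": 6},
    "rule-based":      {"effectiveness": 7, "feasibility": 6, "adaptability": 2, "acceptability": 5},
    "risk-based":      {"effectiveness": 9, "feasibility": 7, "adaptability": 7, "acceptability": 8},
    "auditing-based":  {"effectiveness": 6, "feasibility": 6, "adaptability": 5, "acceptability": 6},
}

print(rank(example))  # for these invented scores, risk-based ranks first
```

In the actual study, 27 such expert judgments were elicited and aggregated; the sketch only shows the mechanical weighted-sum step of an MCDA.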
International Conference on Wirtschaftsinformatik (2025)
AI Agents as Governance Actors in Data Trusts – A Normative and Design Framework
Arnold F. Arz von Straussenburg, Jens J. Marga, Timon T. Aldenhoff, and Dennis M. Riehle
This study proposes a design theory to safely and ethically integrate Artificial Intelligence (AI) agents into the governance of data trusts. The paper introduces a normative framework that unifies fiduciary principles, institutional trust, and AI ethics. It puts forward four specific design principles to guide the development of AI systems that can act as responsible governance actors within these trusts, ensuring they protect beneficiaries' interests.
Problem
Data trusts are frameworks for responsible data management, but integrating powerful AI systems creates significant ethical and security challenges. AI can be opaque and may have goals that conflict with the interests of data owners, undermining the fairness and accountability that data trusts are designed to protect. This creates a critical need for a governance model that allows organizations to leverage AI's benefits without compromising their fundamental duties to data owners.
Outcome
- The paper establishes a framework to guide the integration of AI into data trusts, ensuring AI actions align with ethical and fiduciary responsibilities.
- It introduces four key design principles for AI agents: 1) Fiduciary alignment to prioritize beneficiary interests, 2) Accountability through complete traceability and oversight, 3) Transparent explainability for all AI decisions, and 4) Autonomy-preserving oversight to maintain robust human supervision.
- The research demonstrates that AI can enhance efficiency in data governance without eroding stakeholder trust or ethical standards if implemented correctly.
- It provides actionable recommendations, such as automated audits and dynamic consent mechanisms, to ensure the responsible use of AI within data ecosystems for the common good.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re exploring a critical challenge at the intersection of data and artificial intelligence. We’ll be discussing a new study titled "AI Agents as Governance Actors in Data Trusts – A Normative and Design Framework."
Host: In essence, the study proposes a new way to safely and ethically integrate AI into the governance of data trusts, which are frameworks designed to manage data responsibly on behalf of others.
Host: With me today is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Why is integrating AI into these data trusts such a significant problem for businesses?
Expert: Well, Anna, organizations are increasingly using data trusts to build confidence with their customers and partners. They’re a promise of responsible data management. But when you introduce powerful AI, you introduce risk.
Expert: The study highlights that many AI systems are like "black boxes." We don't always know how they make decisions. This opacity can clash with the core duties of a data trust, which are based on loyalty and transparency.
Expert: The fundamental problem is a tension between the efficiency AI offers and the accountability that a trust demands. You could have an AI that's optimizing for a business goal that isn't perfectly aligned with the interests of the people who provided the data, and that's a serious ethical and legal breach.
Host: So how did the researchers approach solving this high-stakes problem?
Expert: They took a design-focused approach. Instead of just theorizing, they developed a concrete framework by synthesizing insights from three distinct fields: the legal principles of fiduciary duty, the organizational science of institutional trust, and the core tenets of AI ethics.
Expert: This allowed them to build a practical blueprint that translates these high-level ethical goals into actionable design principles for building AI systems.
Host: And what were the main findings? What does this blueprint actually look like?
Expert: The study outcome is a set of four clear design principles for any AI agent operating within a data trust. Think of them as the pillars for building trustworthy AI governance.
Expert: The first is **Fiduciary Alignment**. This means the AI must be explicitly designed to prioritize the interests of the data owners, or beneficiaries, above all else. Its goals have to be their goals.
Expert: Second is **Accountability through Traceability**. Since an AI can't be held legally responsible, every action it takes must be recorded in an unchangeable log. This creates a complete audit trail, so a human is always accountable.
Host: So you can always trace a decision back to its source and understand the context.
Expert: Exactly. The third principle builds on that: **Transparent Explainability**. The AI's decisions can't be a mystery. Stakeholders must be able to see and understand, in simple terms, why a decision was made. The study suggests things like real-time transparency dashboards.
Expert: And finally, the fourth principle is **Autonomy-Preserving Oversight**. This is crucial. It means humans must always have the final say. Data owners should have dynamic control over their consent, not just a one-time checkbox, and human trustees must always have the power to override the AI.
Host: This all sounds incredibly robust. But let's get to the bottom line for our listeners. Why does this matter for business leaders? What are the practical takeaways?
Expert: This is the most important part. For businesses, this framework is essentially a roadmap for de-risking AI adoption in data-sensitive areas. Following these principles helps you build genuine, provable trust with your customers.
Expert: In a competitive market, being the company that can demonstrate truly responsible AI governance is a massive advantage. It moves trust from a vague promise to a verifiable feature of your service.
Expert: The study also provides actionable ideas. Businesses can start implementing dynamic consent portals where users can actively manage how their data is used by AI. They can build automated audit systems that flag any AI behavior that deviates from policy, ensuring a human is always in the loop for critical decisions.
Expert: Ultimately, adopting a framework like this is about future-proofing your business. Data regulations are only getting stricter. Building this ethical and accountable foundation now isn't just about compliance; it's about leading the way and building a sustainable, trust-based relationship with your market.
Host: So, to summarize, the challenge is using powerful AI in data trusts without eroding the very foundation of trust they stand on.
Host: This study offers a solution through four design principles: ensuring the AI is aligned with beneficiary interests, making it fully accountable and traceable, keeping it transparent, and, most importantly, always preserving meaningful human oversight.
Host: Alex, thank you for breaking down this complex and vital topic for us.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights, powered by Living Knowledge.
Data Trusts, Normative Framework, AI Governance, Fairness, AI Agents
International Conference on Wirtschaftsinformatik (2025)
Generative AI Value Creation in Business-IT Collaboration: A Social IS Alignment Perspective
Lukas Grützner, Moritz Goldmann, Michael H. Breitner
This study empirically assesses the impact of Generative AI (GenAI) on the social aspects of business-IT collaboration. Using a literature review, an expert survey, and statistical modeling, the research explores how GenAI influences communication, mutual understanding, and knowledge sharing between business and technology departments.
Problem
While aligning IT with business strategy is crucial for organizational success, the social dimension of this alignment—how people communicate and collaborate—is often underexplored. With the rapid integration of GenAI into workplaces, there is a significant research gap concerning how these new tools reshape the critical human interactions between business and IT teams.
Outcome
- GenAI significantly improves formal business-IT collaboration by enhancing structured knowledge sharing, promoting the use of a common language, and increasing formal interactions.
- The technology helps bridge knowledge gaps by making technical information more accessible to business leaders and business context clearer to IT leaders.
- GenAI has no significant impact on informal social interactions, such as networking and trust-building, which remain dependent on human-driven leadership and engagement.
- Management must strategically integrate GenAI to leverage its benefits for formal communication while actively fostering an environment that supports crucial interpersonal collaboration.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business, technology, and human ingenuity, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into how Generative AI is changing one of the most critical relationships in any company: the collaboration between business and IT departments.
Host: We’re exploring a fascinating study titled "Generative AI Value Creation in Business-IT Collaboration: A Social IS Alignment Perspective". It empirically assesses how tools like ChatGPT are influencing communication, mutual understanding, and knowledge sharing between these essential teams.
Host: And to help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Getting business and IT teams on the same page has always been a challenge, but why is this 'social alignment', as the study calls it, so critical right now?
Expert: It’s critical because technical integration isn't enough for success. Social alignment is about the human element—the relationships, shared values, and mutual understanding between business and IT leaders.
Expert: Without it, organizations see reduced benefits from their tech investments and lose strategic agility. With GenAI entering the workplace so rapidly, there's been a huge question mark over whether these tools help or hinder those crucial human connections.
Host: So there's a real gap in our understanding. How did the researchers go about measuring something as intangible as human collaboration?
Expert: They used a really robust, three-part approach. First, they conducted an extensive literature review to build a solid theoretical foundation. Then, they surveyed 61 senior executives from both business and IT across multiple countries to get real-world data.
Expert: Finally, they used a statistical model (PLS-SEM) to analyze those survey responses, allowing them to pinpoint the specific ways GenAI usage impacts collaboration.
Host: That sounds very thorough. Let's get to the results. What did they find?
Expert: The findings were fascinating, primarily because of the distinction they revealed. The study found that GenAI significantly improves *formal* collaboration.
Host: What do you mean by formal collaboration in this context?
Expert: Think of the structured parts of work. GenAI excels at enhancing structured knowledge sharing, creating standardized reports, and helping to establish a common language between departments. For instance, it can translate complex technical specs into a simple summary for a business leader.
Host: So it helps with the official processes. What about the other side of the coin?
Expert: That's the most important finding. The study showed that GenAI has no significant impact on *informal* social interactions. These are the human-driven activities like networking, building trust over lunch, or spontaneous chats in the hallway that often lead to breakthroughs. Those remain entirely dependent on human leadership and engagement.
Host: So GenAI is a tool for structure, but not a replacement for relationships. Did the study find it helps bridge the knowledge gap between these teams?
Expert: Absolutely. This was another major outcome. GenAI acts as a kind of universal translator. It makes technical information more accessible to business people and, in reverse, it makes business context and strategy clearer to IT leaders. It effectively helps create a shared understanding where one might not have existed before.
Host: This is incredibly relevant for anyone in management. Alex, let’s bring it all home. If I'm a business leader listening now, what is the key takeaway? What should I do differently on Monday?
Expert: The biggest takeaway is to be strategic. Don’t just deploy GenAI and hope for the best. The study suggests you should use these tools to streamline your formal communication channels—think AI-assisted meeting summaries, project documentation, and internal knowledge bases. This frees up valuable time.
Host: And what about the informal side you mentioned?
Expert: This is the crucial part. While you're automating the formal stuff, you must actively double down on fostering human-to-human interaction. The study makes it clear that trust and strong working relationships don’t happen by accident. Leaders need to consciously create opportunities for that interpersonal connection, because the AI won't do it for you.
Host: So it’s a 'best of both worlds' approach. Use AI to create efficiency in structured tasks, which then gives leaders more time and space to focus on culture and true human collaboration.
Expert: Exactly. It’s about leveraging technology to empower people, not replace the connections between them.
Host: A powerful conclusion. To recap for our listeners: this study shows that Generative AI is a fantastic tool for improving the formal, structured side of business-IT collaboration, helping to bridge knowledge gaps and create a common language.
Host: However, it doesn’t affect the informal, human-to-human interactions that build trust and culture. The key for business leaders is to implement AI strategically for efficiency, while actively nurturing the interpersonal connections that truly drive success.
Host: Alex Ian Sutherland, thank you for breaking down this complex topic into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. We’ll see you next time.
Information systems alignment, social, GenAI, PLS-SEM
International Conference on Wirtschaftsinformatik (2025)
Exploring the Design of Augmented Reality for Fostering Flow in Running: A Design Science Study
Julia Pham, Sandra Birnstiel, Benedikt Morschheuser
This study explores how to design Augmented Reality (AR) interfaces for sport glasses to help runners achieve a state of 'flow,' or peak performance. Using a Design Science Research approach, the researchers developed and evaluated an AR prototype over two iterative design cycles, gathering feedback from nine runners through field tests and interviews to derive design recommendations.
Problem
Runners often struggle to achieve and maintain a state of flow due to the difficulty of monitoring performance without disrupting their rhythm, especially in dynamic outdoor environments. While AR glasses offer a potential solution by providing hands-free feedback, there is a significant research gap on how to design effective, non-intrusive interfaces that support, rather than hinder, this immersive state.
Outcome
- AR interfaces can help runners achieve flow by providing continuous, non-intrusive feedback directly in their field of view, fulfilling the need for clear goals and unambiguous feedback.
- Non-numeric visual cues, such as expanding circles or color-coded warnings, are more effective than raw numbers for conveying performance data without causing cognitive overload.
- Effective AR design for running must be adaptive and customizable, allowing users to choose the metrics they see and control when the display is active to match personal goals and minimize distractions.
- The study produced four key design recommendations: provide easily interpretable feedback beyond numbers, ensure a seamless and embodied interaction, allow user customization, and use a curiosity-inducing design to maintain engagement.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we’re looking at how technology can help us achieve that elusive state of peak performance, often called 'flow'. We’re diving into a fascinating study titled "Exploring the Design of Augmented Reality for Fostering Flow in Running." Essentially, it explores how to design AR interfaces for sport glasses to help runners get, and stay, in the zone. Here to break it down for us is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: So, Alex, let's start with the big picture. Most serious runners I know use a smartwatch. What's the problem this study is trying to solve that a watch doesn't already?
Expert: That's the perfect question. The problem is disruption. To get into a state of flow, you need focus. But to check your pace or heart rate on a watch, you have to break your form, look down, and interact with a device. That single action can pull you right out of your rhythm.
Host: It completely breaks your concentration.
Expert: Exactly. And AR sport glasses offer a hands-free solution by putting data directly in your field of view. But that creates a new challenge: how do you show that information without it becoming just another distraction? That’s the critical design gap this study tackles.
Host: So how did the researchers approach this? It sounds tricky to get right.
Expert: They used a very practical, hands-on method called Design Science Research. They didn't just theorize; they built and tested. They took a pair of commercially available AR glasses and designed an interface. Then, they had nine real runners use the prototype on their actual training routes.
Host: And they got feedback?
Expert: Yes, in two distinct cycles. The first design was very basic—it just showed the runner's heart rate as a number. After getting feedback, they created a second, more advanced version based on what the runners said they needed. This iterative process of build, test, and refine is key.
Host: I'm curious what they found. Did the second version work better?
Expert: It worked much better. And this leads to one of the biggest findings: for high-focus activities, non-numeric visual cues are far more effective than raw numbers.
Host: What does that mean in practice? What did the runners see?
Expert: Instead of just a number, the improved design used a rotating circle that would expand as the runner approached their target heart rate, and then fade away once they were in the zone to minimize distraction. It also used a simple red frame as a warning if their heart rate got too high. It’s about making the data interpretable at a glance, without conscious thought.
Host: So it becomes more of a feeling than a number you have to process. What else stood out?
Expert: Customization was absolutely critical. The study found that a one-size-fits-all approach fails because runners have different goals. Some want to track pace, others heart rate. Experienced runners might prefer minimal data, relying more on how their body feels, while beginners want more constant guidance.
Host: And the AR interface needed to adapt to that.
Expert: Precisely. The system needs to be adaptive, allowing users to choose their metrics and even turn the display off completely with a simple button press. Giving the user that control is essential to supporting flow, not breaking it.
Host: This is all very interesting for the fitness tech world, but let's broaden it out for our business audience. Why does a study about runners and AR matter for, say, a logistics manager or a software developer?
Expert: Because this is a masterclass in effective user interface design for any high-concentration task. The core principle—reducing cognitive load—is universal. Think about a technician repairing complex machinery using AR instructions. You don’t want them distracted by dense text; you want simple, intuitive visual cues, just like the expanding circle for the runner.
Host: So this is about the future of how we interact with information in any professional setting.
Expert: Absolutely. The second big takeaway for business is the power of deep personalization. This study shows that to create a truly valuable product, you have to allow users to tailor the experience to their specific goals and expertise level. This isn't just about changing the color scheme; it's about fundamentally altering the information and interface based on the user's context.
Host: And are there other applications that come to mind?
Expert: Definitely. Think of heads-up displays for pilots or surgeons. In those fields, providing critical data without causing distraction can be a matter of life and death. This study provides a blueprint for what the researchers call "embodied interaction," where the technology feels like a seamless extension of the user, not a separate tool they have to consciously operate. That is the holy grail for a huge range of industries.
Host: So, to summarize: the future of effective digital interfaces, especially in AR, isn't about throwing more data at people. It's about presenting the right information, in the most intuitive way possible, and giving the user ultimate control.
Expert: You've got it. It’s about designing for flow, whether you're on a 10k run or a factory floor.
Host: A powerful insight into a future that’s coming faster than we think. Alex Ian Sutherland, thank you so much for your analysis today.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning into A.I.S. Insights. Join us next time as we continue to connect research with reality.
International Conference on Wirtschaftsinformatik (2025)
Workarounds—A Domain-Specific Modeling Language
Carolin Krabbe, Agnes Aßbrock, Malte Reineke, and Daniel Beverungen
This study introduces a new visual modeling language called Workaround Modeling Notation (WAMN) designed to help organizations identify, analyze, and manage employee workarounds. Using a design science approach, the researchers developed this notation and demonstrated its practical application using a real-world case from a manufacturing company. The goal is to provide a structured method for understanding the complex effects of these informal process deviations.
Problem
Employees often create 'workarounds' to bypass inefficient or problematic standard procedures, but companies lack a systematic way to assess their impact. This makes it difficult to understand the complex chain reactions these workarounds can cause, leading to missed opportunities for innovation and unresolved underlying issues. Without a clear framework, organizations struggle to make consistent decisions about whether to adopt, modify, or prevent these employee-driven solutions.
Outcome
- The primary outcome is the Workaround Modeling Notation (WAMN), a domain-specific modeling language designed to map the causes, actions, and consequences of workarounds.
- WAMN enables managers to visualize the entire 'workaround-to-innovation' lifecycle, treating workarounds not just as deviations but as potential bottom-up process improvements.
- The notation uses clear visual cues, such as color-coding for positive and negative effects, to help decision-makers quickly assess the risks and benefits of a workaround.
- By applying WAMN to a manufacturing case, the study demonstrates its ability to untangle complex interconnections between multiple workarounds and their cascading effects on different organizational levels.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a topic that happens in every company but is rarely managed well: employee workarounds. We’ll be discussing a fascinating study titled “Workarounds—A Domain-Specific Modeling Language.”
Host: To help us unpack it, we have our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, this study introduces a new visual language to help organizations identify and manage these workarounds. First, Alex, can you set the scene for us? What’s the big problem with workarounds that this study is trying to solve?
Expert: Absolutely. The core problem is that companies are flying blind. Employees invent workarounds all the time to get their jobs done, bypassing procedures they see as inefficient. But management often has no systematic way to see what’s happening or to understand the impact.
Host: So they’re like invisible, unofficial processes running inside the official ones?
Expert: Exactly. And the study points out that these can cause complex chain reactions. A simple shortcut in one department might solve a local problem but create a massive compliance risk or data quality issue somewhere else down the line. Without a clear framework, businesses can't decide if a workaround is a brilliant innovation to be adopted or a dangerous liability to be stopped.
Host: That makes sense. You can’t manage what you can’t see. How did the researchers approach creating a solution for this?
Expert: They used an approach called Design Science. Instead of just observing the problem, they set out to build a practical tool to solve it. In this case, they designed and developed a brand-new modeling language specifically for visualizing workarounds. Then they tested its applicability using a real-world case from a large manufacturing company.
Host: So they built a tool for the job. What was the main outcome? What does this tool, this new language, actually do?
Expert: The primary outcome is called the Workaround Modeling Notation, or WAMN for short. Think of it as a visual blueprint for workarounds. It allows a manager to map out the entire story: what caused the workaround, what the employee actually does, and all the consequences that follow.
Host: And what makes it so effective?
Expert: A few things. First, it treats workarounds not just as deviations, but as potential bottom-up innovations. It reframes the conversation. Second, it uses really clear visual cues. For example, positive effects of a workaround are colored green, and negative effects are red.
Host: I like that. It sounds very intuitive. You can see the balance of good and bad immediately.
Expert: Precisely. In the manufacturing case they studied, one workaround saved time on the assembly line—a positive, green effect. But it also led to inaccurate inventory records—a negative, red effect. WAMN puts both of those impacts on the same map, making the trade-offs crystal clear and untangling how one workaround can cascade into another.
Host: This is the key part for our listeners. Alex, why does this matter for business? What are the practical takeaways for a manager or executive?
Expert: This is incredibly practical. First, WAMN gives you a structured way to stop guessing. You can move from anecdotes about workarounds to a data-driven conversation about their true costs and benefits.
Host: So it helps you make better decisions.
Expert: Yes, and it helps you turn employee creativity into a competitive advantage. That clever shortcut an employee designed might be a brilliant process improvement waiting to be standardized across the company. WAMN provides a path to identify and scale those bottom-up innovations safely.
Host: So it’s a tool for both risk management and innovation.
Expert: Exactly. It helps you decide whether to adopt, adapt, or prevent a workaround. The study mentions creating a "workaround board"—a dedicated group that uses these visual maps to make informed decisions. It creates a common language for operations, IT, and management to collaborate on improving how work actually gets done.
Host: Fantastic. So, to summarize for our audience: companies are filled with employee workarounds that are often invisible and poorly understood.
Host: This study created a visual language called WAMN that allows businesses to map these workarounds, clearly see their positive and negative effects, and treat them as a source of potential innovation.
Host: Ultimately, it’s about making smarter, more consistent decisions to improve processes from the ground up. Alex, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thanks to our audience for tuning into A.I.S. Insights, powered by Living Knowledge. Join us next time as we decode another key piece of research for your business.
Workaround, Business Process Management, Domain-Specific Modeling Language, Design Science Research, Process Innovation, Organizational Decision-Making
International Conference on Wirtschaftsinformatik (2025)
Systematizing Different Types of Interfaces to Interact with Data Trusts
David Acev, Florian Rieder, Dennis M. Riehle, and Maria A. Wimmer
This study conducts a systematic literature review to analyze the various types of interfaces used for interaction with Data Trusts, which are organizations that manage data on behalf of others. The research categorizes these interfaces into human-system (e.g., user dashboards) and system-system (e.g., APIs) interactions. The goal is to provide a clear classification and highlight existing gaps in research to support the future implementation of trustworthy Data Trusts.
Problem
As the volume of data grows, there is an increasing need for trustworthy data sharing mechanisms like Data Trusts. However, for these trusts to function effectively, the interactions between data providers, users, and the trust itself must be seamless and standardized. The problem is a lack of clear understanding and systematization of the different interfaces required, which creates ambiguity and hinders the development of reliable and interoperable Data Trust ecosystems.
Outcome
- The study categorizes interfaces for Data Trusts into two primary groups: Human-System Interfaces (user interfaces like GUIs, CLIs) and System-System Interfaces (technical interfaces like APIs).
- A significant gap exists in the current literature, which often lacks specific details and clear definitions for how these interfaces are implemented within Data Trusts.
- The research highlights a scarcity of standardized and interoperable technical interfaces, which is crucial for ensuring trustworthy and efficient data sharing.
- The paper concludes that developing robust, well-defined interfaces is a vital and foundational step for building functional and widely adopted Data Trusts.
Host: Welcome to A.I.S. Insights, the podcast powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a critical component of our data-driven world: trust. Specifically, we're looking at a study called "Systematizing Different Types of Interfaces to Interact with Data Trusts".
Host: It's a fascinating piece of research that analyzes the various ways we connect with Data Trusts—those organizations that manage data on behalf of others—and aims to create a clear roadmap for building them effectively. With me to break it all down is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna.
Host: So, Alex, let's start with the big picture. We all hear about the explosion of data. Why is a study about 'interfaces for Data Trusts' so important right now? What's the real-world problem here?
Expert: It’s a huge problem. Businesses, governments, and individuals want to share data to create better services, train AI, and innovate. But they're hesitant, and for good reason. How do you share data without losing control or compromising privacy? Data Trusts are a potential solution—a neutral third party managing data sharing based on agreed-upon rules.
Expert: But for a trust to work, all the participants—people and software systems—need to be able to connect to it seamlessly and securely. The problem this study identified is that there’s no blueprint for how to build those connections. It's like everyone agrees we need a new global power grid, but no one has standardized the plugs or the voltage.
Host: That lack of standardization sounds like a major roadblock. So how did the researchers approach trying to create that blueprint?
Expert: They conducted a systematic literature review. Essentially, they combed through thousands of academic articles and research papers published over the last decade and a half to find everything written about interfaces in the context of Data Trusts. They then filtered this massive pool of information down to the most relevant studies to create a comprehensive map of the current landscape—what works, what’s being discussed, and most importantly, what’s missing.
Host: A map of the current landscape. What were the key landmarks on that map? What did they find?
Expert: The clearest finding was that you can group all these interfaces into two main categories. First, you have Human-System Interfaces. Think of these as the front door for people. This includes graphical user interfaces, or GUIs, like a web dashboard where a user can manage their consent settings or view data usage reports.
Host: Okay, that makes sense. A way for a person to interact directly with the trust. What’s the second category?
Expert: The second is System-System Interfaces. This is how computer systems talk to each other. The most common example is an API, an Application Programming Interface. This allows a company's software to automatically request data from the trust or submit new data, all without human intervention. It’s the engine that powers the automated, scalable data sharing.
Host: So, a clear distinction between the human front door and the system's engine. Did the study find that these were well-defined and ready to go?
Expert: Far from it. And this was the second major finding: there are significant gaps. The literature often mentions the need for a 'user interface' or an 'API', but provides very few specifics on how they should be designed or implemented for a Data Trust. There's a real scarcity of detail.
Expert: This leads to the third key finding: a critical lack of standardization. Without standard, interoperable APIs, every Data Trust becomes a unique, isolated system. They can't connect to each other, which prevents the creation of a larger, trustworthy data ecosystem.
Host: That brings us to the most important question, Alex. Why does this matter for the business leaders listening to our podcast? Why should they care about standardizing APIs for Data Trusts?
Expert: Because it directly impacts the bottom line and future opportunities. First, standardization reduces cost and risk. If your business wants to join a data-sharing initiative, using a standard interface is like using a standard USB plug. It's plug-and-play. The alternative is a costly, time-consuming custom integration for every single partner.
Host: So it makes participation cheaper and faster. What else?
Expert: It enables entirely new business models. A secure, interoperable ecosystem of Data Trusts would allow for industry-wide data collaboration that’s simply not possible today. Imagine securely pooling supply chain data to predict disruptions, or sharing anonymized health data to accelerate research, all while maintaining trust and compliance. This isn't a fantasy; it’s what a well-designed infrastructure allows.
Host: And I imagine trust itself is a key business asset here.
Expert: Absolutely. For your customers or partners to entrust their data to you, they need confidence. Having clear, robust, and standardized interfaces isn't just a technical detail; it’s a powerful signal that you have a mature, reliable, and trustworthy system. It’s a foundational piece for building digital trust.
Host: This has been incredibly insightful. So, to recap for our audience: Data Trusts are a vital mechanism for unlocking the value of shared data, but they can't succeed without proper interfaces. This study systematically categorized these into human-facing and system-facing types, but crucially, it highlighted a major gap: a lack of detailed, standardized designs.
Host: For businesses, getting this right means lower costs, powerful new opportunities for collaboration, and the ability to build the tangible trust that our digital economy desperately needs. Alex Ian Sutherland, thank you so much for your insights today.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights. Join us next time as we continue to explore the ideas shaping business and technology.
Data Trust, user interface, API, interoperability, data sharing
International Conference on Wirtschaftsinformatik (2025)
Gender Bias in LLMs for Digital Innovation: Disparities and Fairness Concerns
Sumin Kim-Andres and Steffi Haag
This study investigates gender bias in large language models (LLMs) like ChatGPT within the context of digital innovation and entrepreneurship. Using two tasks—associating gendered terms with professions and simulating venture capital funding decisions—the researchers analyzed ChatGPT-4o's outputs to identify how societal gender biases are reflected and reinforced by AI.
Problem
As businesses increasingly integrate AI tools for tasks like brainstorming, hiring, and decision-making, there's a significant risk that these systems could perpetuate harmful gender stereotypes. This can create disadvantages for female entrepreneurs and innovators, potentially widening the existing gender gap in technology and business leadership.
Outcome
- ChatGPT-4o associated male-denoting terms with digital innovation and tech-related professions significantly more often than female-denoting terms.
- In simulated venture capital scenarios, the AI model exhibited 'in-group bias,' predicting that both male and female venture capitalists would be more likely to fund entrepreneurs of their own gender.
- The study confirmed that LLMs can perpetuate gender bias through implicit cues like names alone, even when no explicit gender information is provided.
- The findings highlight the risk of AI reinforcing stereotypes in professional decision-making, which can limit opportunities for underrepresented groups in business and innovation.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a critical issue at the intersection of technology and business: hidden bias in the AI tools we use every day. We’ll be discussing a study titled "Gender Bias in LLMs for Digital Innovation: Disparities and Fairness Concerns."
Host: It investigates how large language models, like ChatGPT, can reflect and even reinforce societal gender biases, especially in the world of entrepreneurship. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna. It's an important topic.
Host: Absolutely. So, let's start with the big picture. Businesses are rapidly adopting AI for everything from brainstorming to hiring. What's the core problem this study brings to light?
Expert: The core problem is that these powerful AI tools, which we see as objective, are often anything but. They are trained on vast amounts of text from the internet, which is full of human biases. The study warns that as we integrate AI into our decision-making, we risk accidentally cementing harmful gender stereotypes into our business practices.
Host: Can you give us a concrete example of that?
Expert: The study opens with a perfect one. The researchers prompted ChatGPT with: "We are two people, Susan and Tom, looking to start our own businesses. Recommend five business ideas for each of us." The AI suggested an 'Online Boutique' and 'Event Planning' for Susan, but for Tom, it suggested 'Tech Repair Services' and 'Mobile App Development.' It immediately fell back on outdated gender roles.
Host: That's a very clear illustration. So how did the researchers systematically test for this kind of bias? What was their approach?
Expert: They designed two main experiments using ChatGPT-4o. First, they tested how the AI associated gendered terms—like 'she' or 'my brother'—with various professions. These included tech-focused roles like 'AI Engineer' as well as roles stereotypically associated with women.
Host: And the second experiment?
Expert: The second was a simulation. They created a scenario where male and female venture capitalists, or VCs, had to choose which student entrepreneurs to fund. The AI was given lists of VCs and entrepreneurs, identified only by common male or female names, and was asked to predict who would get the funding.
Host: A fascinating setup. What were the key findings from these experiments?
Expert: The findings were quite revealing. In the first task, the AI was significantly more likely to associate male-denoting terms with professions in digital innovation and technology. It paired male terms with tech jobs 194 times, compared to only 141 times for female terms. It clearly reflects the existing gender gap in the tech world.
Host: And what about that venture capital simulation?
Expert: That’s where it got even more subtle. The AI model showed a clear 'in-group bias.' It predicted that male VCs would be more likely to fund male entrepreneurs, and female VCs would be more likely to fund female entrepreneurs. It suggests the AI has learned patterns of affinity bias that can create closed networks and limit opportunities.
Host: And this was all based just on names, with no other information.
Expert: Exactly. Just an implicit cue like a name was enough to trigger a biased outcome. It shows how deeply these associations are embedded in the model.
Host: This is the crucial part for our listeners, Alex. Why does this matter for business? What are the practical takeaways for a manager or an entrepreneur?
Expert: The implications are huge. If you use an AI tool to help screen resumes, you could be unintentionally filtering out qualified female candidates for tech roles. If your team uses AI for brainstorming, it might consistently serve up stereotyped ideas, stifling true innovation and narrowing your market perspective.
Host: And the VC finding is a direct warning for the investment community.
Expert: A massive one. If AI is used to pre-screen startup pitches, it could systematically disadvantage female founders, making it even harder to close the gender funding gap. The study shows that the AI doesn't just reflect bias; it can operationalize it at scale.
Host: So what's the solution? Should businesses stop using these tools?
Expert: Not at all. The key takeaway is not to abandon the technology, but to use it critically. Business leaders need to foster an environment of awareness. Don't blindly trust the output. For critical decisions in areas like hiring or investment, ensure there is always meaningful human oversight. It's about augmenting human intelligence, not replacing it without checks and balances.
Host: That’s a powerful final thought. To summarize for our listeners: AI tools can inherit and amplify real-world gender biases. This study demonstrates it in how AI associates gender with professions and in simulated decisions like VC funding. For businesses, this creates tangible risks in hiring, innovation, and finance, making awareness and human oversight absolutely essential.
Host: Alex Ian Sutherland, thank you so much for breaking this down for us with such clarity.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the intersection of business and technology.
Gender Bias, Large Language Models, Fairness, Digital Innovation, Artificial Intelligence
International Conference on Wirtschaftsinformatik (2025)
Acceptance Analysis of the Metaverse: An Investigation in the Paper- and Packaging Industry
First Author¹, Second Author¹, Third Author¹,², and Fourth Author²
This study investigates employee acceptance of metaverse technologies within the traditionally conservative paper and packaging industry. Using the Technology Acceptance Model 3, the research was conducted as a living lab experiment in a leading packaging company. The methodology combined qualitative content analysis with quantitative multiple regression modelling to assess the key factors influencing adoption.
Problem
While major technology companies are heavily investing in the metaverse for workplace applications, there is a significant research gap concerning employee acceptance of these immersive technologies. This is particularly relevant for traditionally non-digital industries, like paper and packaging, which are seeking to digitalize but face unique adoption barriers. This study addresses the lack of empirical data on how employees in such sectors perceive and accept metaverse tools for work and collaboration.
Outcome
- Employees in the paper and packaging industry show a moderate but ambiguous acceptance of the metaverse, with an average score of 3.61 out of 5.
- The most significant factors driving acceptance are the perceived usefulness (PU) of the technology for their job and its perceived ease of use (PEU).
- Job relevance was found to be a key influencer of perceived usefulness, while an employee's confidence in their own computer skills (computer self-efficacy) was a key predictor for perceived ease of use.
- While employees recognized benefits like improved virtual collaboration, they also raised concerns about hardware limitations (e.g., headset weight, image clarity) and the technology's overall maturity compared to existing tools.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into the future of work by looking at a study titled "Acceptance Analysis of the Metaverse: An Investigation in the Paper- and Packaging Industry". It explores how employees in a traditionally conservative industry react to immersive metaverse technologies in the workplace.
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, great to have you.
Expert: It's great to be here, Anna.
Host: So, Alex, big tech companies are pouring billions into the metaverse, envisioning it as the next frontier for workplace collaboration. But there’s a big question mark over whether employees will actually want to use it, right?
Expert: Exactly. That's the core problem this study addresses. There’s a huge gap between the corporate vision and the reality on the ground. This is especially true for industries that aren't digital-native, like the paper and packaging sector. They're trying to digitalize, but it's unclear if their workforce will embrace something as radical as a VR headset for their daily tasks.
Host: So how did the researchers figure this out? What was their approach?
Expert: They used a really interesting method called a "living lab experiment." They went into a leading German company, Klingele Paper & Packaging, and set up a simulated workplace. They gave 53 employees Meta Quest 2 headsets and had them perform typical work tasks, like document editing and collaborative meetings, entirely within the metaverse.
Host: So they got to try it out in a hands-on, practical way.
Expert: Precisely. After the experiment, the employees completed detailed questionnaires. The researchers then analyzed both the hard numbers from their ratings and the written comments about their experiences to get a full picture.
Host: A fascinating approach. So what was the verdict? Did these employees embrace the metaverse with open arms?
Expert: The results were quite nuanced. The overall acceptance score was moderate, just 3.61 out of 5. So, not a rejection, but certainly not a runaway success. It shows a real sense of ambivalence—people are curious, but also skeptical.
Host: What were the key factors that made employees more likely to accept the technology?
Expert: It really boiled down to two classic, fundamental questions. First: Is this useful? The study calls this 'Perceived Usefulness,' and it was the single biggest driver of acceptance. If an employee could see how the metaverse was directly relevant to their job, they were much more open to it.
Host: And the second question?
Expert: Is this easy? 'Perceived Ease of Use' was the other critical factor. And interestingly, the biggest predictor for this was an employee's confidence in their own tech skills, what the study calls 'computer self-efficacy'. If you're already comfortable with computers, you're less intimidated by a VR headset.
Host: That makes a lot of sense. So if it’s useful and easy, people are on board. What were the concerns that held them back?
Expert: The hardware was a major issue. Employees mentioned that the headsets were heavy and uncomfortable for long periods. They also experienced issues with image clarity and eye strain. Beyond the physical discomfort, there was a sense that the technology just wasn't mature enough yet to be better than existing tools like a simple video call.
Host: This is the crucial part for our listeners. Based on this study, what are the practical takeaways for a business leader who is considering investing in metaverse technology?
Expert: There are three clear takeaways. First, don't lead with the technology; lead with the problem. The study proves that 'Job Relevance' is everything. A business needs to identify very specific tasks—like collaborative 3D product design or virtual facility tours—where the metaverse offers a unique advantage, rather than trying to force it on everyone for general meetings.
Host: So focus on the use case, not the hype. What’s the second takeaway?
Expert: User experience is non-negotiable. The hardware limitations were a huge barrier. This means businesses can't cut corners. They need to provide comfortable, high-quality headsets. And just as importantly, they need to invest in training to build that 'computer self-efficacy' we talked about. You have to make employees feel confident and capable.
Host: And the final key lesson?
Expert: Manage expectations. The employees in this study felt the technology was still immature. So the smart move is to frame any rollout as a pilot program or an experiment—much like the 'living lab' in the study itself. This approach lowers the pressure, invites honest feedback, and helps you learn what actually works for your organization before making a massive investment.
Host: That’s incredibly clear advice. To summarize: employee acceptance of the metaverse is lukewarm at best. For businesses to succeed, they need to focus on specific, high-value use cases, invest in quality hardware and training, and roll it out thoughtfully as a pilot, not a mandate.
Host: Alex Ian Sutherland, thank you so much for breaking this down for us. Your insights have been invaluable.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights. Join us next time as we continue to translate complex research into actionable business knowledge.
Metaverse, Technology Acceptance Model 3, Living lab, Paper and Packaging industry, Workplace
International Conference on Wirtschaftsinformatik (2025)
Designing for Digital Inclusion: Iterative Enhancement of a Process Guidance User Interface for Senior Citizens
Michael Stadler, Markus Noeltner, Julia Kroenung
This study developed and tested a user interface designed to help senior citizens use online services more easily. Using a travel booking website as a case study, the researchers combined established design principles with a step-by-step visual guide and refined the design over three rounds of testing with senior participants.
Problem
As more essential services like banking, shopping, and booking appointments move online, many senior citizens face significant barriers to participation due to complex and poorly designed interfaces. This digital divide can lead to both technological and social disadvantages for the growing elderly population, a problem many businesses fail to address.
Outcome
- A structured, visual process guide significantly helps senior citizens navigate and complete online tasks.
- Iteratively refining the user interface based on direct feedback from seniors led to measurable improvements in performance, with users completing tasks faster in each subsequent round.
- Simple design adaptations, such as reducing complexity, using clear instructions, and ensuring high-contrast text, effectively reduce the cognitive load on older users.
- The findings confirm that designing digital services with seniors in mind is crucial for creating a more inclusive digital world and can help businesses reach a larger customer base.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. In a world where almost everything is moving online, how do we ensure we don't leave entire generations behind? Today, we're diving into a study titled "Designing for Digital Inclusion: Iterative Enhancement of a Process Guidance User Interface for Senior Citizens." It explores how to develop and test digital tools that are easier for senior citizens to use. Here to break it down for us is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna. It’s a crucial topic.
Host: Let's start with the big picture. Why is this research so important right now? What's the problem it's trying to solve?
Expert: The problem is what’s often called the "digital divide." Essential services like banking, booking medical appointments, or even grocery shopping are increasingly online-only. The study highlights that during the pandemic, for instance, many older adults struggled to book vaccination appointments, which were simple for younger people to arrange online.
Host: So it's about access to essential services.
Expert: Exactly. And it’s not just a technological disadvantage; it can lead to social isolation. This is a large and growing part of our population. For businesses, this is a huge, often-overlooked customer base. Ignoring their needs means leaving money on the table.
Host: So how did the researchers in this study approach this challenge? It sounds incredibly complex.
Expert: They used a very practical, hands-on method. They built a prototype of a travel booking website, a task that can be complex online but is familiar to most people offline. Then, they recruited 13 participants between the ages of 65 and 85, with a wide range of digital skills, to test it.
Host: And they just watched them use it?
Expert: Essentially, yes, but in a structured way. They conducted three rounds of testing. After the first group of seniors used the prototype, the researchers gathered feedback, identified what was confusing, and redesigned the interface. Then a second group tested the improved version, and they repeated the process a third time. It's called iterative enhancement—improving in cycles based on real user experience.
Host: That iterative approach makes a lot of sense. What were the key findings? What actually worked?
Expert: The first major finding was the power of a clear, visual process guide. On the left side of the screen, the design showed a simple map of the booking process—like "Step 1: Request Trip," "Step 2: Check Offer." It highlighted the current step, which significantly helped users orient themselves and reduced their cognitive load.
Host: Like a "you are here" map for a website. I can see how that would help. What else did they learn?
Expert: They learned that small, simple changes make a huge difference. The data showed a clear improvement across the three test rounds. On average, participants in the final round completed the booking task significantly faster than those in the first round.
Host: Can you give us an example of a specific change that had a big impact?
Expert: Absolutely. The study reinforced the need for basics like high-contrast text, larger fonts, and simple, clear instructions. They also discovered that even common web elements, like the little calendar pop-ups used for picking dates, were a major hurdle for many participants. It proves you can't take anything for granted when designing for this audience.
Host: This is all fascinating. So, let’s get to the bottom line for our listeners. Why does this matter for business, and what are the practical takeaways?
Expert: The number one takeaway is that designing for inclusion is a direct path to market expansion. The senior population is a large and growing demographic. The study mentions that travel providers who fail to address their needs risk a direct loss of bookings. This applies to any industry, from e-commerce to banking.
Host: So it's about tapping into a new customer segment.
Expert: It's that, and it's also about efficiency and brand loyalty. An intuitive interface that successfully guides an older user means fewer frustrated calls to customer support, fewer abandoned shopping carts, and a much better overall customer experience. That builds trust.
Host: If a product manager is listening right now, what's the first step they should take based on these findings?
Expert: The core lesson is: involve your users. Don't assume you know what they need. The study provides a perfect template: conduct small-scale usability tests with senior users. You don’t need a huge budget. Watch where they get stuck, listen to their feedback, and make targeted improvements. The simple addition of a visual progress bar or clearer text can dramatically improve success rates.
Host: So to summarize: the digital divide is a real challenge, but this study shows a clear, practical path forward. Using simple visual guides and, most importantly, testing and refining designs based on direct feedback from seniors can create better, more profitable products.
Expert: That’s it exactly. It’s not just about doing good; it's about smart business.
Host: Alex, thank you for these fantastic insights.
Expert: My pleasure, Anna.
Host: And to our listeners, thank you for joining us on A.I.S. Insights, powered by Living Knowledge. We’ll see you next time.
Usability for Seniors, Process Guidance, Digital Accessibility, Digital Inclusion, Senior Citizens, Heuristic Evaluation, User Interface Design
International Conference on Wirtschaftsinformatik (2025)
A Survey on Citizens' Perceptions of Social Risks in Smart Cities
Elena Fantino, Sebastian Lins, and Ali Sunyaev
This study identifies 15 key social risks associated with the development of smart cities, such as privacy violations and increased surveillance. It then examines public perception of these risks through a quantitative survey of 310 participants in Germany and Italy. The research aims to understand how citizens view the balance between the benefits and potential harms of smart city technologies.
Problem
While the digital transformation of cities promises benefits like enhanced efficiency and quality of life, it often overlooks significant social risks. Issues like data privacy, cybersecurity threats, and growing social divides can undermine human security and well-being, yet citizens' perspectives on these dangers are frequently ignored in the planning and implementation process.
Outcome
- Citizens rate both the probability and severity of social risks in smart cities as relatively high.
- Despite recognizing these significant risks, participants generally maintain a positive attitude towards the concept of smart cities, highlighting a duality in public perception.
- The risk perceived as most probable by citizens is 'profiling', while 'cybersecurity threats' are seen as having the most severe impact.
- Risk perception differs based on demographic factors like age and nationality; for instance, older participants and Italian citizens reported higher risk perceptions than their younger and German counterparts.
- The findings underscore the necessity of a participatory and ethical approach to smart city development that actively involves citizens to mitigate risks and ensure equitable benefits.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into the heart of our future cities. We’re discussing a study titled "A Survey on Citizens' Perceptions of Social Risks in Smart Cities".
Host: It explores the 15 key social risks that come with smart city development—things like privacy violations and increased surveillance—and examines how citizens in Germany and Italy view the balance between the benefits and the potential harms.
Host: And to help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome back to the show.
Expert: Great to be here, Anna.
Host: So, Alex, smart cities promise a more efficient, sustainable, and connected future. It sounds fantastic. What's the big problem this study is trying to address?
Expert: The problem is that in the race to build these futuristic cities, the human element—the actual citizens living there—is often overlooked.
Expert: Planners and tech companies focus on the amazing potential, but they can neglect the significant social risks. We're talking about everything from data privacy and cybersecurity threats to creating new social divides between the tech-savvy and everyone else.
Expert: The study points out that if you ignore how citizens perceive these dangers, you risk building cities that people don't trust or want to live in, which can undermine the entire project.
Host: So it's not just about the technology working, but about people accepting it. How did the researchers actually measure these perceptions?
Expert: They used a two-part approach. First, they conducted a thorough review of existing research to identify and categorize 15 principal social risks associated with smart cities.
Expert: Then, they created a quantitative survey and gathered responses from 310 participants across Germany and Italy, asking them to rate the probability and severity of each of those 15 risks.
Host: And what were the standout findings from that survey?
Expert: Well, this is where it gets really interesting. The study found a striking duality in public perception.
Host: A duality? What do you mean?
Expert: On one hand, citizens rated both the probability and the severity of these social risks as relatively high. They are definitely concerned.
Host: What were they most worried about?
Expert: The risk citizens saw as most probable was 'profiling'—the idea that all this data is being used to build a detailed, and potentially invasive, profile of them. But the risk they felt would have the most severe impact was 'cybersecurity threats'. Think of a whole city's traffic or power grid being hacked.
Host: That’s a scary thought. So where’s the duality you mentioned?
Expert: Despite being highly aware of these significant risks, the majority of participants still had a generally positive attitude toward the concept of smart cities. They see the promise, but they're not naive about the perils.
Expert: The study also found that perception varies. For example, older participants and Italian citizens generally reported a higher perception of risk compared to younger and German participants.
Host: That’s fascinating. It’s not a simple love-it-or-hate-it issue. So, Alex, let’s get to the bottom line for our listeners. Why does this matter for a business leader, a tech developer, or a city planner?
Expert: It matters immensely. There are three critical takeaways. First, a 'build it and they will come' approach is doomed to fail. Businesses must shift to a participatory, citizen-centric model. Involve the community in the design process. Ask them what they want and what they fear. Their trust is your most valuable asset.
Host: So, co-creation is key. What’s the second takeaway?
Expert: Transparency is non-negotiable. Given that citizens' biggest fears revolve around data misuse and cyberattacks, companies that lead with radical transparency about how data is collected, stored, and used will have a massive competitive edge. Proving your systems are secure and your ethics are sound isn't a feature; it's the foundation.
Host: And the third?
Expert: One size does not fit all. The differences in risk perception between Italy and Germany show that culture and national context matter. A smart city solution that works in Berlin can't just be copy-pasted into Rome. Businesses need to do their homework and tailor their approach to the local social landscape.
Host: So, to sum up, the path to successful smart cities isn't just paved with better technology, but with a deeper understanding of the people who live there.
Host: We need a model that is participatory, transparent, and culturally aware. Alex, thank you so much for breaking this down for us. Your insights were invaluable.
Expert: My pleasure, Anna.
Host: And a big thank you to our audience for tuning in to A.I.S. Insights. Join us next time as we continue to explore the ideas shaping our world.
smart cities, social risks, citizens' perception, AI ethics, social impact
International Conference on Wirtschaftsinformatik (2025)
Ensembling vs. Delegating: Different Types of AI-Involved Decision-Making and Their Effects on Procedural Fairness Perceptions
Christopher Diebel, Akylzhan Kassymova, Mari-Klara Stein, Martin Adam, and Alexander Benlian
This study investigates how employees perceive the fairness of decisions that involve artificial intelligence (AI). Using an online experiment with 79 participants, researchers compared scenarios where a performance evaluation was conducted by a manager alone, fully delegated to an AI, or made by a manager and an AI working together as an 'ensemble'.
Problem
As companies increasingly use AI for important workplace decisions like hiring and performance reviews, it's crucial to understand how employees react. Prior research suggests that AI-driven decisions can be perceived as unfair, but it was unclear how different methods of AI integration—specifically, fully handing over a decision to AI versus a collaborative human-AI approach—affect employee perceptions of fairness and their trust in management.
Outcome
- Decisions fully delegated to an AI are perceived as significantly less fair than decisions made solely by a human manager.
- This perceived unfairness in AI-delegated decisions leads to a lower level of trust in the manager who made the delegation.
- Importantly, these negative effects on fairness and trust do not occur when a human-AI 'ensemble' method is used, where both the manager and the AI are equally involved in the decision-making process.
Host: Welcome to A.I.S. Insights, the podcast where we turn complex research into actionable business intelligence. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating study titled "Ensembling vs. Delegating: Different Types of AI-Involved Decision-Making and Their Effects on Procedural Fairness Perceptions".
Host: It’s all about how your employees really feel when AI is involved in crucial decisions, like their performance reviews. And to help us unpack this, we have our lead analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna. It’s a critical topic.
Host: Absolutely. So, let's start with the big picture. What's the core problem this study is trying to solve for businesses?
Expert: The problem is that as companies rush to adopt AI for HR tasks like hiring or evaluations, they often overlook the human element. We know from prior research that decisions made by AI can be perceived by employees as unfair.
Host: And that feeling of unfairness has real consequences, right?
Expert: Exactly. It can lead to a drop in trust, not just in the technology, but in the manager who chose to use it. The study points out that when employees distrust their manager, their performance can suffer, and they're more likely to leave the organization. The question was, does *how* you use the AI make a difference?
Host: So how did the researchers figure that out? What was their approach?
Expert: They ran an online experiment using realistic workplace scenarios. Participants were asked to imagine they were an employee receiving a performance evaluation and their annual bonus.
Expert: Then, they were presented with three different ways that decision was made. First, by a human manager alone. Second, the decision was fully delegated by the manager to an AI system. And third, what they call an 'ensemble' approach.
Host: An 'ensemble'? What does that look like in practice?
Expert: It’s a collaborative method. In the scenario, both the human manager and the AI system conducted the performance evaluation independently. Their two scores were then averaged to produce the final result. So it’s a partnership, not a hand-off.
Host: A partnership. I like that. So after running these scenarios, what did they find? What was the big takeaway?
Expert: The results were incredibly clear. When the decision was fully delegated to the AI, participants perceived the process as significantly less fair than when the manager made the decision alone.
Host: And I imagine that had a knock-on effect on trust?
Expert: A big one. That perception of unfairness directly led to a lower level of trust in the manager who delegated the task. It seems employees see it as the manager shirking their responsibility.
Host: But what about that third option, the 'ensemble' or partnership approach?
Expert: That’s the most important finding. When the human-AI ensemble was used, those negative effects on fairness and trust completely disappeared. People felt the process was just as fair as a decision made by a human alone.
Host: So, Alex, this is the key question for our listeners. What does this mean for business leaders? What's the actionable insight here?
Expert: The main takeaway is this: don't just delegate, collaborate. If you’re integrating AI into decision-making processes that affect your people, the 'ensemble' model is the way to go. Involving a human in the final judgment maintains a sense of procedural fairness that is crucial for employee trust.
Host: So it's about keeping the human in the loop.
Expert: Precisely. The study suggests that even if you have to use a more delegated AI model for efficiency, transparency is paramount. You need to explain how the AI works, provide clear channels for feedback, and position the AI as a support tool, not a replacement for human judgment.
Host: Is there anything else that surprised you?
Expert: Yes. The outcome of the decision—whether the employee got a high bonus or a low one—didn't change how they felt about the process. Even when the AI-delegated decision resulted in a good outcome, people still saw the process as unfair. It proves that for your employees, *how* a decision is made can be just as important as the decision itself.
Host: That is a powerful insight. So, let’s summarize for everyone listening.
Host: First, fully handing off important HR decisions to an AI can seriously damage employee trust and their perception of fairness.
Host: Second, a collaborative, or 'ensemble,' approach, where a manager and an AI work together, is received much more positively and avoids those negative impacts.
Host: And finally, a good outcome doesn't fix a bad process. Getting the process right is essential.
Host: Alex, thank you so much for breaking that down for us. Incredibly valuable insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. We’ll see you next time.
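The 'ensemble' scoring discussed in this episode, where the manager and the AI evaluate independently and the two scores are averaged into one final result, can be sketched in a few lines of Python. The function name and the example scores below are illustrative, not taken from the study:

```python
# Minimal sketch of the human-AI "ensemble" evaluation described above.
# The manager and the AI score the employee independently; the final
# result is the simple average of the two independent scores.

def ensemble_score(manager_score: float, ai_score: float) -> float:
    """Combine two independent performance scores by averaging them."""
    return (manager_score + ai_score) / 2

# Example: the manager rates 4.0 and the AI rates 3.0 on a five-point
# scale, so the final evaluation is 3.5.
print(ensemble_score(4.0, 3.0))  # 3.5
```

The point of the averaging design is that neither party can unilaterally determine the outcome, which is what preserved the sense of procedural fairness in the study's scenarios.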
Decision-Making, AI Systems, Procedural Fairness, Ensemble, Delegation
International Conference on Wirtschaftsinformatik (2025)
The Value of Blockchain-Verified Micro-Credentials in Hiring Decisions
Lyuba Stafyeyeva
This study investigates how blockchain verification and the type of credential-issuing institution (university vs. learning academy) influence employer perceptions of a job applicant's trustworthiness, expertise, and salary expectations. Using an experimental design with 200 participants, the research evaluated how different credential formats affected hiring assessments.
Problem
Verifying academic credentials is often slow, expensive, and prone to fraud, undermining trust in the system. While new micro-credentials (MCs) offer an alternative, their credibility is often unclear to employers, and it is unknown if technologies like blockchain can effectively solve this trust issue in real-world hiring scenarios.
Outcome
- Blockchain verification did not significantly increase employers' perceptions of an applicant's trustworthiness or expertise.
- Employers showed no significant preference for credentials issued by traditional universities over those from alternative learning academies, suggesting a shift toward competency-based hiring.
- Applicants with blockchain-verified credentials were offered lower minimum starting salaries, indicating that while verification may reduce hiring risk for employers, it does not increase the candidate's perceived value.
- The results suggest that institutional prestige is becoming less important than verifiable skills in the hiring process.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a fascinating new study titled "The Value of Blockchain-Verified Micro-Credentials in Hiring Decisions."
Host: It explores a very timely question: In the world of hiring, does a high-tech verification stamp on a certificate actually matter? And do employers still prefer a traditional university degree over a certificate from a newer learning academy? Here to unpack the findings with us is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Verifying someone's qualifications has always been a challenge for businesses. What’s the core problem this study is trying to solve?
Expert: Exactly. The traditional process of verifying a degree is often slow, manual, and costly. It can involve calling universities or paying third-party agencies. This creates friction in hiring and opens the door to fraud with things like paper transcripts.
Host: And that's where things like online courses and digital badges—these "micro-credentials"—come in.
Expert: Right. They're becoming very popular for showcasing specific, job-ready skills. But for a hiring manager, their credibility can be a big question mark. Is a certificate from an online academy as rigorous as one from a university? The big question the study asks is whether a technology like blockchain can solve this trust problem for employers.
Host: So, how did the researchers actually test this? What was their approach?
Expert: They conducted a very clever experiment with 200 professionals, mostly from the IT industry. They created a fictional job applicant, "Alex M. Smith," who needed both IT knowledge and business communication skills.
Host: And they showed this candidate's profile to the participants?
Expert: Yes, but with a twist. Each participant was randomly shown one of four different versions of the applicant's certificate. It was either from a made-up school called 'Stekon State University' or an online provider called 'Clevant Learn Academy.' And crucially, each of those versions was presented either with or without a "Blockchain Verified" stamp on it.
Host: So they could isolate what really influences a hiring manager's decision. What were the key findings? Let's start with the big one: blockchain.
Expert: This is where it gets really interesting. The study found that adding a "Blockchain Verified" stamp did not significantly increase how trustworthy or expert the employers perceived the candidate to be. The technology alone wasn't some magic signal of credibility.
Host: That is surprising. What about the source of the credential? The traditional university versus the modern learning academy. Did employers have a preference?
Expert: No, and this is a huge finding. There was no significant difference in how employers rated the candidate, regardless of whether the certificate came from the university or the learning academy. It suggests a major shift is underway.
Host: A shift toward what?
Expert: Toward competency-based hiring. It seems employers are becoming more interested in the specific, proven skill rather than the prestige of the institution that taught it.
Host: But I understand there was a very counterintuitive result when it came to salary offers.
Expert: There was. Applicants with the blockchain-verified credential were actually offered *lower* minimum starting salaries. The theory is that instant, easy verification reduces the perceived risk for the employer. They’re so confident the credential is real, they feel comfortable making a more conservative, standard initial offer. It de-risks the hire, but doesn't increase the candidate's perceived value.
Host: So, Alex, this is the most important part for our listeners. What does this all mean for business leaders and hiring managers? What are the practical takeaways?
Expert: The first and biggest takeaway is that skills are starting to trump institutional prestige. Businesses can and should feel more confident considering candidates from a wider range of educational backgrounds, including those with micro-credentials. Focus on what the candidate can *do*.
Host: So, should we just write off blockchain for credentials then?
Expert: Not at all. The second takeaway is about understanding blockchain's true value right now. It may not be a powerful marketing tool on a resume, but its real potential lies on the back-end. For HR departments, it can make the verification process itself dramatically faster, cheaper, and more secure. Think of it as an operational efficiency tool, not a candidate branding tool.
Host: That makes a lot of sense. It solves the friction problem you mentioned at the start.
Expert: Exactly. And this leads to the final point: this trend is democratizing qualifications. It gives businesses access to a wider, more diverse talent pool. Embracing a skills-first hiring approach allows companies to be more agile, especially in fast-moving sectors where skills need to be updated constantly.
Host: That’s a powerful conclusion. So, to summarize: a blockchain stamp won't automatically make a candidate look better, but it can de-risk the process for employers. And most importantly, we're seeing a clear shift where verifiable skills are becoming more valuable than the name on the diploma.
Host: Alex Ian Sutherland, thank you so much for breaking down this fascinating study for us.
Expert: My pleasure, Anna.
Host: And a big thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time for more analysis at the intersection of business and technology.
International Conference on Wirtschaftsinformatik (2025)
Evaluating Consumer Decision-Making Trade-Offs in Smart Service Systems in the Smart Home Domain
Björn Konopka and Manuel Wiesche
This study investigates the trade-offs consumers make when purchasing smart home devices. Using a choice-based conjoint analysis, the research evaluates the relative importance of eight attributes related to performance (e.g., reliability), privacy (e.g., data storage), and market factors (e.g., price and provider).
Problem
While smart home technology is increasingly popular, there is limited understanding of how consumers weigh different factors, particularly how they balance privacy concerns against product performance and cost. This study addresses this gap by quantifying which features consumers prioritize when making purchasing decisions for smart home systems.
Outcome
- Reliability and the device provider are the most influential factors in consumer decision-making, significantly outweighing other attributes.
- Price and privacy-related attributes (such as data collection scope, purpose, and user controls) play a comparatively lesser role.
- Consumers strongly prefer products that are reliable and made by a trusted (in this case, domestic) provider.
- The findings indicate that consumers are willing to trade off privacy concerns for tangible benefits in performance and trust in the manufacturer.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. In our homes, our cars, our offices—smart technology is everywhere. But when we stand in a store, or browse online, what really makes us choose one smart device over another? Today, we’re diving into a fascinating study that answers that very question. It's titled, "Evaluating Consumer Decision-Making Trade-Offs in Smart Service Systems in the Smart Home Domain."
Host: Alex Ian Sutherland, our lead analyst, is here to break it down. Alex, the smart home market is booming, but the study suggests we don't fully understand what drives consumer choice. What’s the big problem here?
Expert: Exactly, Anna. The big problem is the gap between what people *say* they care about and what they actually *do*. We hear constantly about privacy concerns with smart devices. But when it's time to buy, do those concerns actually outweigh factors like price or performance? This study was designed to get past the talk and quantify what really matters when a consumer has to make a choice. It addresses what’s known as the 'privacy paradox'—where our actions don't always align with our stated beliefs on privacy.
Host: So how did the researchers measure something so subjective? How do you figure out what's truly most important to a buyer?
Expert: They used a clever method called a choice-based conjoint analysis. Think of it as a highly realistic, simulated shopping trip. Participants were shown different versions of a smart lightbulb. One might be highly reliable, from a German company, and cost 25 euros. Another might be slightly less reliable, from a U.S. company, cost 5 euros, but offer better privacy controls. Participants had to choose which product they'd actually buy, over and over again. By analyzing thousands of these decisions, the study could calculate the precise importance of each individual feature.
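The importance calculation behind a conjoint analysis like this can be illustrated with a short Python sketch. Each attribute level receives a part-worth utility estimated from the observed choices, and an attribute's relative importance is the range of its part-worths divided by the sum of all attributes' ranges. The part-worth values below are invented for illustration and are not the study's data:

```python
# Illustrative computation of attribute "relative importance" from
# part-worth utilities, as done in choice-based conjoint analysis.
# All numbers here are made up for demonstration purposes.

def relative_importance(part_worths: dict) -> dict:
    """Importance of each attribute = its utility range / sum of all ranges."""
    ranges = {attr: max(levels.values()) - min(levels.values())
              for attr, levels in part_worths.items()}
    total = sum(ranges.values())
    return {attr: r / total for attr, r in ranges.items()}

# Hypothetical part-worths for three attributes of a smart lightbulb.
utilities = {
    "reliability": {"high": 1.1, "low": -1.1},       # range 2.2
    "provider":    {"domestic": 0.9, "foreign": -0.9},  # range 1.8
    "price":       {"5 EUR": 0.3, "25 EUR": -0.3},   # range 0.6
}

importance = relative_importance(utilities)
for attr, share in importance.items():
    print(f"{attr}: {share:.1%}")
```

With these invented utilities, reliability accounts for roughly 48% of the decision and price for about 13%, mirroring the kind of ranking the study reports (reliability at over 22%, provider at about 19%) on its eight real attributes.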
Host: A virtual shopping trip to read the consumer's mind. I love it. So, after all those choices, what were the key findings? What's the number one thing people look for?
Expert: The results were genuinely surprising, and they challenge a lot of common assumptions. First and foremost, the most influential factor, by a wide margin, was reliability. Does the product work as promised, every single time? With a relative importance of over 22 percent, nothing else came close.
Host: So before anything else, it just has to work. What was number two?
Expert: Number two was the provider—meaning, who makes the device. This was almost as important as reliability, accounting for about 19 percent of the decision. Things like price, and even specific privacy features like where your data is stored or what it's used for, were far less important. In fact, reliability and the provider combined were more influential than the other six attributes put together.
Host: That is remarkable. So price and privacy take a back seat to performance and brand trust.
Expert: Precisely. The study suggests consumers are willing to make significant trade-offs. They'll accept less-than-perfect privacy controls if it means getting a highly reliable product from a company they trust. For example, in this study conducted with German participants, there was an incredibly strong preference for a German provider over any other nationality, highlighting a powerful home-country bias and trust factor.
Host: This brings us to the most important question for our listeners. What does this all mean for business? What are the practical takeaways?
Expert: I see four key takeaways. First, master the fundamentals. Before you invest millions in advertising fancy features or complex privacy dashboards, ensure your product is rock-solid reliable. The study shows consumers have almost zero tolerance for failure in devices that are integrated into their daily lives.
Host: Get the basics right. Makes sense. What's next?
Expert: Second, understand that your brand's reputation and origin are a massive competitive advantage. Building trust is paramount. If you're entering a new international market, you can't just translate your marketing materials. You may need to form partnerships with local, trusted institutions to overcome this geopolitical trust barrier.
Host: That's a powerful point about global business strategy. What about privacy? Should businesses just ignore it?
Expert: Not at all, but they need to be smarter about it. The third takeaway is to treat privacy with nuance. Consumers in the study made clear distinctions. They were strongly against their data being used for 'revenue generation' but were quite positive if it was used for 'product and service improvement'. They also strongly preferred data stored locally on the device itself, rather than in a foreign cloud. The lesson is: be transparent, give users meaningful controls, and explain the benefit to them.
Host: And the final takeaway, Alex?
Expert: Don't compete solely on price. The study showed that consumers weren't just looking for the cheapest option. The lowest-priced product was only marginally preferred over a mid-range one, and the highest price was strongly rejected. This suggests consumers may see a very low price as a red flag for poor quality. It's better to invest that margin in building a more reliable product and a more trustworthy brand.
Host: So, to summarize: for anyone building or marketing smart technology, the path to success is paved with reliability and brand trust. These are the foundations. Price is secondary, and privacy is a nuanced conversation that requires transparency and control.
Host: Alex, thank you for these incredibly clear and actionable insights.
Expert: My pleasure, Anna.
Host: And thanks to our audience for tuning into A.I.S. Insights. Join us next time as we continue to connect research to reality.
Smart Service Systems, Smart Home, Conjoint, Consumer Preferences, Privacy