
Redesign of Error Messaging

A data-driven, strategic initiative to increase user autonomy and reduce support tickets for SAP Build Process Automation.

Project overview

Challenge

Through an AI-assisted analysis of all support tickets from 2025, I identified significant optimization potential around error messaging. This data-driven insight, combined with existing customer feedback and other problem statements related to error messages, confirmed the high priority of improvements and gained strong support from the leadership team.
As the initiating UX designer, my challenge was to develop a systematic solution that goes beyond superficial fixes. The goal was to create a scalable, user-centered framework that measurably reduces support costs by increasing user autonomy.

Objectives

  • Gradual reduction of support ticket volume

  • Increase in self-resolution rate

  • Establishment of a consistent, scalable, and understandable framework for all error messages

  • Implementation of AI-supported assistance in error messaging

Problems

  • Up to 45% of all support tickets are related to unclear error messages, tying up valuable developer resources in support tasks.

  • Vague messages without actionable guidance created artificial dead-ends that halted users’ productivity and caused frustration.

  • A lack of context in error messages forced time-consuming follow-ups and significantly delayed issue resolution during support.

Project scope

  • UX Research

  • UX Strategy & Systems Thinking

  • Product Design 

Duration

  • November 2025 – December 2025

Role

  • Lead UX

Tools

  • Figma

  • Mural

  • ServiceNow

  • Confluence/Jira

Approach

01. Problem discovery & strategic framing

02. Framework design & prototyping

03. Multi-track validation with users and developers

04. Synthesis & strategic roadmap

01. Problem discovery & strategic framing

Through a cross-cutting analysis of support tickets, I uncovered a critical issue with high business value. I quantified the business impact to elevate the initiative from a pure UI tweak to a strategic project.

I proactively added error-message improvements to the quarterly planning. An analysis of ServiceNow tickets (Q1–Q4 2025) showed that a consistent 35–45% of all support requests were related to unclear error messages. This metric provided an incontrovertible data basis for prioritizing the topic. The analysis also identified the four most critical areas where these shortcomings generated the highest support costs and the greatest frustration:

  1. Project Deployment & Release

  2. External Connectivity (Destinations, Actions, and Mail Server)

  3. AI Services & DOX

  4. In-Studio Authoring Experience (Saving & Editing)

From the follow-up qualitative analysis a clear picture of the current experience emerged:

  • Opaque & non-actionable: Generic messages like “Something went wrong” offered no path to resolution.

  • Poor discoverability: Critical information was so hidden that 5 out of 6 users immediately opened developer tools (F12) to diagnose.

  • Misleading content: Available details were often inaccurate and sent users on long searches.
     

From these findings I derived three clear user needs that formed the foundation for the solution:

  1. Clear & actionable content: An understandable explanation of the problem and the steps to resolve it.

  2. Contextual information: The exact step and data that triggered the error.

  3. Direct navigation: Clickable error messages that lead users directly to the problem source.
     

The complete synthesis of this initial project phase is available here as a document.

02. Framework design & prototyping

Based on the discovery findings, I developed a scalable design framework instead of isolated one-off solutions. Targeted prototypes for four "Assistance Levels" served as the basis to systematically test hypotheses about how to support users.

The anatomy of an effective error message is simple:

  • It names the problem,

  • explains the cause,

  • and guides the user to a solution.
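
To make this anatomy concrete, here is a minimal sketch of how such a message could be modeled as a data structure. All names are illustrative assumptions for the example, not the actual SBPA implementation:

```typescript
// Minimal sketch of the three-part anatomy of an error message.
// Field names are illustrative assumptions, not the SBPA implementation.
interface ErrorMessage {
  problem: string;     // names the problem
  cause: string;       // explains the cause
  resolution: string;  // guides the user to a solution
}

// Example following the anatomy: problem, cause, solution.
const duplicateName: ErrorMessage = {
  problem: "The project could not be released.",
  cause: "An artifact with the name 'Invoice Approval' already exists.",
  resolution: "Rename one of the artifacts, then release the project again.",
};
```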

SAP Fiori guidelines provided the visual direction, but they did not define distinct assistance levels or their interplay — both prerequisites for a scalable framework. I therefore visualized all viable content variations for the assistance levels as wireframes and analyzed them against the problem areas identified in Phase 1 to find the most effective combinations. Other forms (e.g., banners, toast messages) were explored where appropriate.

From this exploration four core patterns emerged and were converted into testable assistance levels with corresponding hypotheses:

  • Level 1: Precise text message for simple errors.
    H1: Short, action‑focused text increases first‑contact self‑resolution.

  • Level 2: Context-sensitive link to help documentation.
    H2: With increasing assistance level, user frustration around errors decreases.

  • Level 3: Direct deep link that navigates to the error source.
    H3: Deep links increase perceived self‑recoverability and reduce time‑to‑recovery.

  • Level 4: AI-powered solution suggestions via Joule.
    H4: AI‑provided help reduces frustration.
    H5: AI‑provided help reduces users’ need to consume contextual details themselves.
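
For illustration, the four levels can be modeled as an escalating union type. This is a sketch of the prototype logic with assumed names, not production code:

```typescript
// Sketch of the four assistance levels as an escalating union type.
// Names and shape are assumptions for illustration only.
type Assistance =
  | { level: 1; text: string }                            // precise text message
  | { level: 2; text: string; helpUrl: string }           // context-sensitive help link
  | { level: 3; text: string; artifactDeepLink: string }  // direct navigation to the error source
  | { level: 4; text: string; jouleContext: object };     // AI suggestion with structured context

// Each level adds exactly one affordance on top of the previous one,
// which is what hypotheses H1–H5 were designed to test in isolation.
```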

These concepts were combined into an interactive prototype to validate their effectiveness in user testing.

03. Multi-track validation with users and developers

In a dual validation approach I tested the concepts both with users and for feasibility with developers. This process not only revealed user needs but also uncovered root causes and potential solutions for the current error-message problems.

3.1 Qualitative user validation

Participants

  • 10 participants in total: 6 initial sessions plus 4 follow‑ups, added once we realized that novice vs. expert differences needed better coverage.

  • Mixed experience levels: novices and experts; varied technical skill; focus on SBPA runtime workflows.

Method

  • Moderated 60-minute remote sessions using the think‑aloud method.

  • Tasks focused on the critical use cases identified in Phase 1.

Observed metrics

  • Problem recognition — do users correctly identify the issue?

  • Recoverability — can users state how the error can be resolved?

  • Behavioral reactions — what actions do users take?

  • Perceived ease / satisfaction — measured via the Single Ease Question (SEQ)

📄 Level 1: Text Only
  • User perception: Effective, but only for simple issues. Seen as sufficient and straightforward for specific errors (Use Case C) but frustrating for undefined problems (Use Case B).
  • Impact on actionability: High. Users knew exactly what to do next: either fix the simple error or follow the procedural steps (retry/contact support).
  • Representative quote: “This is pretty straightforward because it tells exactly what’s the issue like duplicate name.”

🔗 Level 2: Help Portal Link
  • User perception: Helpful. Universally seen as a positive, low-effort addition that saves users from having to search for documentation manually.
  • Impact on actionability: High. Provided a clear next step for users who needed more context to understand the syntax or rules.
  • Representative quote: “What I really like is the help link because… normally I would then like type in build help process automation and just of course it’s one step easier.”

➡️ Level 3: Actionable Link
  • User perception: Extremely positive. The “Go to artifact” link was a standout feature, perceived as a highly efficient and direct way to resolve issues.
  • Impact on actionability: Very high. Provided the clearest and fastest path to resolution by eliminating the need for users to manually search for the problematic component.
  • Representative quote: “With my experience… it’s five, especially also with the point. I can now just jump to it.”

🤖 Level 4: AI Fix / Create
  • User perception: Mixed and skeptical. While the idea of AI help was exciting, users were skeptical of a generic “Ask Joule” prompt, assuming it would provide a non-specific answer.
  • Impact on actionability: Low to moderate. Most experienced users stated they would ignore the AI for complex errors and investigate manually due to a lack of trust. New users were more curious to try.
  • Representative quote: “I would now currently assume that I would get a generic answer on the type of AH, OK Yeah, all of your projects must be valid before releasing the project.”

Summary: User reactions revealed a clear hierarchy: a direct navigation link (Level 3) delivered the highest efficiency and acceptance; AI support (Level 4) was met with marked skepticism, especially among experienced users. Overall, experts seek deep context to accelerate workflows, while novice users prioritise simple, step‑by‑step guidance.

3.2 Technical discovery & architectural alignment

In the second part of the validation, I conducted interviews to assess the technical feasibility of our concepts and explore solution approaches.

Participants

  • 6 participants

  • Varied work experience: 3–25 years; roles included Lead Developer, Manager, and Architect

Method

  • Semi-structured, exploratory remote interviews lasting 60–90 minutes

Four key findings revealed fundamental, systemic causes of poor error messages:

  1. Information loss across system boundaries is the core problem. Detailed error information is not propagated along the microservices chain. Only an architectural, cross-team "Error Contract" can ensure structured error data is consistently passed to the user interface (a minimal sketch of such a contract follows this list).
     

  2. Feasibility of deep links: the user-favoured deep link is technically implementable for most process errors at design time, provided the necessary context IDs are included. This confirmed our user-validation results.

     

  3. Potential for an "Easy Bug Report" workflow: developers confirmed that a simple internal process for reporting poor error messages could continuously improve quality with moderate effort.

     

  4. Foundations for AI assistance: transmitting structured error data (Log ID, Error Code, Model ID) is essential for genuinely helpful AI. A generic "ask me" feature without this context will be viewed critically. A gradual rollout of AI help, limited to cases where a solution can be provided, is necessary to build trust.
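
To illustrate what such an "Error Contract" could look like, here is a minimal sketch built from the structured fields named above (Log ID, Error Code, Model ID). All field names are assumptions for the example; the actual schema would be negotiated across teams:

```typescript
// Minimal sketch of a cross-team "Error Contract": a structured payload
// that every service in the chain propagates instead of a bare string.
// All field names are assumptions, not an agreed SAP schema.
interface ErrorContract {
  logId: string;      // correlates the message with backend logs
  errorCode: string;  // stable, documented identifier (code shown is invented)
  modelId?: string;   // artifact that triggered the error; enables deep links
  message: string;    // human-readable problem statement
  cause?: string;     // technical cause, e.g. for a "Technical Details" block
  helpUrl?: string;   // context-sensitive documentation link
}
```

With a single payload like this, the UI can render any assistance level from the same data, and the optional modelId is exactly the context ID that makes the user-favoured deep link (finding 2) and genuinely contextual AI help (finding 4) feasible.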


From the interview synthesis, many more approaches were derived; all 18 potential initiatives to improve error messages are available to review.

04. Synthesis & strategic roadmap

I consolidated the insights from user validation and technical discovery into a pragmatic, impact-oriented strategic roadmap. It prioritizes solutions by effort and impact, balancing short-term quick wins with long-term, fundamental architectural fixes.

[Figure: Product roadmap timeline]

Strategic recommendations across three horizons:

  • Immediate actions: Implement deep links and UX improvements for the top‑4 error scenarios (a sketch of the deep-link construction follows this list). Introduce a standardised “Technical Details” block with a copyable Log ID to speed up support.

  • Tactical initiatives: Establish an “Easy Bug Report” workflow to enable continuous improvement. In parallel, roll out Joule with structured error data for selected initial use cases to build trust.

  • Fundamental measures: Initiate and implement a cross‑team “Error Contract” to solve information loss across system boundaries and create the foundation for excellent error experiences.
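
As referenced above, here is a hypothetical sketch of the "Go to artifact" deep-link construction. The route pattern is an assumption; real SBPA routes may differ:

```typescript
// Hypothetical "Go to artifact" deep-link builder.
// The /studio/artifact route is an assumed pattern, not a real SBPA URL.
function buildDeepLink(
  baseUrl: string,
  err: { modelId: string; stepId?: string }
): string {
  const url = new URL(`/studio/artifact/${encodeURIComponent(err.modelId)}`, baseUrl);
  if (err.stepId) url.searchParams.set("step", err.stepId);
  return url.toString();
}

// buildDeepLink("https://build.example.com", { modelId: "invoice-approval", stepId: "step-3" })
// -> "https://build.example.com/studio/artifact/invoice-approval?step=step-3"
```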
     

Measurement concept

  • Instrumentation: Track deep‑link usage, help‑link clicks and AI interaction outcomes; tag relevant error types in ServiceNow (an event sketch follows this list).

  • A/B testing: Compare message variants for top error scenarios; primary metrics — self‑resolution rate, time‑to‑recovery, repeat tickets; test windows 2–4 weeks per variant.

  • Weekly monitoring of error‑ticket share, self‑resolution and time‑to‑recovery; qualitative feedback reviews for continuous iteration.
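
One way to instrument this, as a sketch with assumed event and property names rather than an existing schema:

```typescript
// Hypothetical analytics event for the measurement concept.
// All names are assumptions; the real schema would align with ServiceNow tagging.
interface ErrorAssistanceEvent {
  errorCode: string;               // which error was shown
  assistanceLevel: 1 | 2 | 3 | 4;  // which variant the user saw (for A/B tests)
  action:
    | "dismissed"
    | "helpLinkClicked"
    | "deepLinkFollowed"
    | "aiInvoked";                 // what the user did next
  resolvedWithoutTicket: boolean;  // feeds the self-resolution rate
  timeToRecoveryMs?: number;       // from error shown to task resumed
}
```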

Results & lessons learned

This project produced a data-driven, validated, and technically grounded strategy that now serves as a prioritised backlog for implementation and will sustainably change how SBPA error messages are perceived.

Strategic orientation through proactive analysis
My main insight was that the biggest impact lay not in visual tweaks but in strategic preparation. I learned to use support data as a resource to quantify the business value of a latent problem and elevate it from a nuisance to a recognised, prioritised initiative.

Efficiency through dual validation
Parallel validation with users and developers was key to maximising the value of this research. It ensured the solution is both desirable and technically feasible, and it provided clear estimates of effort, feasibility and usability for items in the solution space.

UX as systemic diagnosis
This project showed me that poor user experience often signals a deeper systemic issue. I demonstrated that the UX role goes beyond surface design: we act as system diagnosticians who trace cause chains back to the architecture and initiate and moderate the necessary cross-discipline conversations.

Trust in AI as an outcome, not a requirement
The marked AI scepticism among experts underlined that trust is an outcome, not an input: you earn it through incremental, demonstrably useful use cases that give users control and transparently show how the AI reached its recommendation.
