Journey mapping for IBM AI on Z

Sep 2021 - Feb 2022

Overview

AI on IBM Z embeds artificial intelligence directly into IBM Z software and hardware through a suite of 13+ products. The AI on Z team comprised product managers, engineers, and designers tasked with maintaining existing products and rolling out new offerings. The design team working on AI on Z had historically used research practices to evaluate and ideate on product capabilities. Being new to the team, I was eager to introduce service design practices to understand end-to-end client journeys and identify where pain points lay, ultimately helping the team prioritize areas for improvement.

The "Journey to AI on Z" landing page

My contributions

  • Research planning
  • Secondary research
  • Competitive research
  • Competitive analyses
  • Journey map / user flow diagrams

The team

  • Design lead
  • Product manager
  • Service designer / user researcher (me!)

Process

Service blueprinting

To start this initiative, I decided to create a high-level service blueprint of the AI on Z suite of products to document both what our clients and our team were experiencing. I relied on my teammates to walk me through the project context as well as value statements, personas, objectives, touch points, and processes. The service blueprint was a culmination of all of this information, visualized in a “discover, learn, try, buy” format. Focusing on these four beginning stages kept the blueprint at a high level; any stage past buy (adopt, use, get help, etc.) would be product specific, whereas I wanted to focus on the suite level.

Discover, learn, try and buy stages for AI on Z.

After creating a strategic service blueprint, I conducted a team exercise in which I asked each member to individually document what information they found useful, what questions they had, and where clarity was needed on the front-stage. This activity helped me prioritize which stage to focus my next phase of work on.


From the activity, the try stage surfaced numerous questions and requests for additional clarity. Compared to discover, learn, and buy, this stage seemed the most convoluted to my team and also had the fewest touch points. I decided to investigate the try stage further by conducting competitive research as well as a deep dive into the trial experiences our team offered.

Team activity to help determine which stage to focus on going forward.

Competitor research

My objective for this competitive research phase was to understand competitive standards for trial experiences and services. I started by compiling a list of our closest competitors, reviewing the products, trials, and services available through their digital touch points, and finally creating journey maps. I used Mural to collect screenshots of their product pages and trial experiences and to sketch loose user flows. This phase of research was ultimately limited by the fact that I could only explore digital touch points.

A few competitor processes for trial experiences
Trial overview chart for Google, specifying their trial experiences.

After gathering as much data as I could, I synthesized this information. I analyzed the journey maps I created by comparing them against one another to identify common themes. I then created a trial overview chart for each competitor, summarizing that competitor’s specific trial experiences, to present to my team.

Lastly, I created a competitor analysis comparing all competitor trial and service experiences.

Trial competitor analysis summary chart, summarizing each competitor's trial experience.

From these insights, I created “trial standards” articulating what a “high standard” versus a “low standard” looks like for trials. High standards included free trials lasting longer than 90 days, a hands-on or free-rein experience, and clear next steps to upgrade the account or engage with a sales team. Low standards included no trial or a paid trial limited to a small number of days, limited interaction within the trial (for example, restricted to a provided use case), and unclear steps to take after the trial ended.


Defining standards allowed me to grade IBM’s trial experiences against industry practices.

Trial standards I developed based on patterns in competitor trials

As-is research

After understanding what the competitor trial experience looked like, I needed to look at what AI on IBM Z was providing for our trials. I looked at six trial experiences and created process maps based on my team’s knowledge of these experiences.

Process maps for IBM AI on Z trials

Similar to the competitor research, I created a trial experience overview chart to summarize all of our trial experiences.

Summary chart for IBM AI on Z trial experiences, similar to the competitor charts

Synthesis

Once I had a firm understanding of our competitors’ trial experiences through charts and process maps, the defined low and high trial standards, and IBM’s current trial processes, I was able to compare IBM’s experiences to our competitors’. I gave each competitor and IBM a grade on the low-to-high standard scale, with IBM falling on the medium-to-low side. I presented this information to my team of product managers and engineers to highlight the need to improve our trials.

The final competitor analysis chart, complete with each competitor’s summary and “experience grade”, compared against IBM

Next steps

Research plan overview, including both competitor and “as is” AI on Z trial deep dive

Any good design process needs contact with users. My next steps for this work were validation research and ideation. Validating the high and low experience standards I defined, as well as the AI on Z trial experiences, would confirm or challenge my research. Ultimately, the outcomes of this research would be validated AI on IBM Z trial customer journey maps and a solid understanding of the pain points within these experiences. Ideation would follow, to meet or exceed competitor experiences and, most importantly, address the pain points clients currently experience. Prioritizing ideas based on feasibility and impact to the user would be critical, as would articulating a comprehensive plan for who would carry out these ideas and how.


Due to a change in teams, I was pulled off of the project before I could complete these next steps.

Conclusion

After creating a service blueprint and prioritizing the try stage, this project proved to be a great use case for competitor research. I learned a lot about our competitors and how to carry out secondary competitive research.


If possible, it would have been ideal to interact with our competitors’ non-digital touch points to fully grasp their trial experiences. If time had allowed, I would have liked to carry out client research to validate my experience standards and understand how clients engage with our trials.