UX Research

Research for the mPower App for Parkinson's Disease

Project Overview

  • Role: user researcher and designer

  • Methods: interviews, storyboards, prototyping

  • Deliverables: internal report, presentations, mockups, publication

  • Tools: InVision, pen and paper, Figma

  • Client: Sage Bionetworks

How can we get people to track symptoms for a progressive disease?

Parkinson’s Disease gets worse over time, but the rate at which symptoms change varies quite a bit. To study symptom progression, Sage Bionetworks researchers wanted to create a mobile app that people with Parkinson’s could use to measure their symptoms over time.

To get good data, the Sage team needed an app that provided enough value to users that they would keep using it. My job was to figure out what the app needed to do to deliver that value.

Research Goals

I needed to understand:

  • What is the value to users in tracking symptoms for a progressive disease?

  • What drives people away from tracking symptoms for a progressive disease?

  • What obstacles do people face in trying to track their symptoms?

The process

After discussions with stakeholders about project goals, I interviewed people with Parkinson’s Disease to understand their needs for symptom tracking and managing life with the disease.

  • Recruitment: I wanted to recruit people with Parkinson’s Disease, distributed based on how long they’d been living with the disease, and including both people who already tracked symptoms and people who didn’t. Recruiting was challenging, so I recruited through personal networks, professional networks, participants in past studies, and every other channel I could think of.

    • Since early interviews showed that care partners are often key stakeholders, I expanded the target population to include care partners and interviewed them, recruiting through snowball sampling and other channels.

    • In all, I included 17 people with Parkinson’s and 6 care partners.

  • Methods: Interviews and early prototype studies.

    • Interviews with storyboards: The first four interviews already surfaced themes pointing toward very different use cases, so I drafted storyboards to explore the relative value of, and issues around, four specific use cases. The storyboards helped me focus the conversations on the relative value of different features.

    • Prototyping: After about half the interviews, I started sketching early prototypes. I used the remaining user sessions to get feedback on the prototypes and iterate on the designs.

  • Analysis and reporting: I did quick and dirty analysis during the project and rigorous thematic analysis afterwards.

    • Quick and dirty: I discussed emergent themes regularly with the UX team and with a larger group of cross-functional stakeholders. As I began prototyping, I got my team together and we did some post-it note affinity diagramming on the whiteboard.

    • Rigorous thematic analysis: I cleaned all the interview transcripts, analyzed them in ATLAS.ti, and wrote a CHI paper.

  • Deliverables: Ultimately I delivered a report on the project, a list of design tensions and user needs, a couple of PowerPoint presentations (given to different audiences), and a scientific publication accepted at CHI.

A crop of one of the storyboards used in interviews. Shows a person looking back over their Tap Test data and comparing their results over time to the average for all people with Parkinson’s Disease. The speech bubble is empty so that interview participants could fill it in themselves.

Key findings

  • Users wanted to track their symptoms so that they could identify triggers that affected their symptoms and talk to their doctor about their medication (plus some other reasons).

  • Not everyone wanted to track their symptoms, because they didn’t want to see data showing their symptoms getting worse. However, some of these users still wanted to contribute data to science.

  • Care partners are critical stakeholders, but people with Parkinson’s differ in how they want care partners involved (e.g., as recipients of data, as co-symptom trackers, or not at all).

  • Users experience a variety of symptoms: some can be tracked through tests in an app, while others are very individual and personal.

  • There were some usability issues with some of the endpoints used to measure symptoms.

Translating findings to design

  • Since not everyone wants to see data about their progression, the homepage should prioritize tracking actions rather than data visualization.

  • Users should be able to track more than symptoms—they should also be able to track things like triggers and medication to see what affects their symptoms.

  • Users should be able to see data over long and short time periods.

  • Users need to be able to track different kinds of symptoms in different ways.

  • Care partners could not be included as users due to feasibility concerns.

Let users track more than symptoms

Users wanted to be able to track how meds and triggers affected their symptoms.

Track symptoms in more than one way

Some symptoms could be measured, some could be rated, and some could only be noted. Users needed to track them all.

What was my impact?

  • I defined the main functions of the mPower app for users.

  • I translated my findings into mockups that I handed to the design team, and worked with them to prioritize features since not all could be developed.

  • I demonstrated that care partners were important stakeholders in symptom tracking. Since including multiple users increased the complexity of the system considerably, we did not include them as users of the app. However, this finding gives us insight into future directions.

Notes and Acknowledgments

This work could not have been done without the amazing people at Sage Bionetworks. To name a few: Woody MacDuffie, Stockard Simon, Michael Kellen, Larsson Omnberg, Lara Mangravite, and many others. In addition, this work could not have gone forward without my participants. I thanked you when you participated, and I thank you again here.

I also want to note that one thing I learned in this work was that if you’ve talked to one person with Parkinson’s, you’ve talked to one person with Parkinson’s. The needs of this community are extremely diverse, and while my findings reflected the needs of some users, they definitely do not represent the needs of every person with Parkinson’s.

Speak Up! Theory-Based Prototypes for Hospitalized Patients

Project Overview

  • Role: user researcher, strategist, designer

  • Methods: survey, interviews, prototyping

  • Deliverables: internal report, public presentations

  • Tools: Sketch

  • Sponsor: AHRQ (Agency for Healthcare Research and Quality)

How can we motivate hospitalized patients to speak up about their concerns?

Patients in the hospital don’t always speak up about their concerns, even when there is a risk of medical errors that could harm their health. Helping them speak up can prevent those errors and improve patient safety.

Most of the interventions in this space are directed at clinicians: getting clinicians to speak up to other clinicians about protocol violations. But there isn’t much knowledge about how to help patients speak up to clinicians. My job was to figure out what kinds of design strategies we should pursue.

I decided to approach this as a behavior change problem, and ground my approach in both scientific literature and user studies.

Research Goals

I needed to understand:

  • What kinds of interventions does behavioral science tell us to make?

  • What kinds of design approaches can we take to use that behavioral science, and how do we know they are accurately representing the science (“theoretical fidelity”)?

  • Which of these approaches are most likely to work with patients?

The process

  • Designing theory-based prototypes: First I turned to behavioral science literature, and found the Integrated Behavioral Model. This model names several constructs that contribute to behavior change — things like how confident someone feels in their ability to speak up (self-efficacy), whether they feel they can get opportunities to speak up (perceived control), whether they think other patients speak up (descriptive norms), and others. I designed several prototypes that illustrated different design strategies for each motivational construct in the model.

  • Validating the prototypes: To validate the theoretical fidelity of the prototypes, I conducted a survey asking experts to map the prototypes to the right construct.

    • Unexpected challenge: getting validation was harder than I expected. I did some additional interviews with experts to make sure my validation method was working as it should, and did two rounds of prototyping and survey validation.

Example low-fidelity prototypes used in the interviews. One is an automated tracker of how often other patients in the hospital speak up, the other is a system where patients can anonymously share their stories of speaking up.

Exploring multiple design strategies for each motivational construct yields more insights

These cards show two very different strategies for the same motivational construct of “descriptive norms”. Both illustrate a way to normalize speaking up for patients. But boy did they provoke different reactions!

  • Prototype studies with patients and caregivers in the hospital: I took the prototypes with me and interviewed patients and caregivers while they were in the hospital about their experiences and their interpretations of the validated prototypes. Patients are diverse, and so was my participant pool: I recruited from adult and pediatric hospitals, including patients with a wide variety of conditions, as well as family caregivers helping the patient. In total I interviewed 28 participants.

  • Analysis and reporting: I discussed emergent themes with stakeholders as I went, modifying the interview protocol slightly to push on important themes. I conducted a rigorous thematic analysis in ATLAS.ti and reported my findings in internal presentations and reports, as well as in a presentation at CHI 2020.
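One simple way to score the kind of expert-mapping survey described above is per-prototype agreement with the intended construct. This is a hypothetical sketch, not the study's actual analysis; all names and data here are illustrative:

```python
def fidelity_scores(responses, intended):
    """responses: {prototype: list of constructs chosen by each expert}
    intended:  {prototype: the construct the prototype was designed for}
    Returns, per prototype, the fraction of experts who matched."""
    return {
        proto: sum(choice == intended[proto] for choice in chosen) / len(chosen)
        for proto, chosen in responses.items()
    }

# Three of four hypothetical experts map the prototype to its
# intended construct, so its fidelity score is 0.75.
scores = fidelity_scores(
    {"peer_tracker": ["descriptive norms", "self-efficacy",
                      "descriptive norms", "descriptive norms"]},
    {"peer_tracker": "descriptive norms"},
)
```

A prototype with low agreement would go back for redesign before the next round of validation, which matches the two survey rounds described above.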

Key Findings

  • There are MANY unexplored opportunities to encourage patients to speak up about their care. All of my prototypes were both novel and well-received by patients.

  • Normalizing speaking up and helping patients feel more confident in doing so can both be achieved successfully by in-hospital peer support systems — in other words, tools for patients to share stories and tips with each other. (Don’t worry, other work that I collaborated on explores how to do this well, without creating misinformation or antagonism towards clinicians). This approach may be more effective than other approaches.

  • Creating shared agendas can help patients create opportunities to speak up, but patients need to be confident that their interruptions and additions are welcome to clinicians. Otherwise, patients worry about being rude.

  • Theoretical fidelity is a lot harder to achieve than you might think.

What was my impact?

  • I identified new opportunities for encouraging patients to speak up to promote patient safety.

  • I identified top-performing intervention strategies that would be highest priority to develop and test.

  • I identified issues with establishing theoretical fidelity of interventions — a complex problem for science!

Acknowledgments

This work could not have gone forward without support from our funders (thank you AHRQ!) and without the collaboration of my excellent colleagues: Wanda Pratt, Shefali Haldar, Ari Pollack, and other members of the Patients as Safeguards research team. Thank you for your support!

Priming Affective Rewards to Encourage Exercise

Project Overview

  • Role: user researcher, designer

  • Methods: surveys, experiment (micro-randomized trial), interviews

  • Deliverables: reports and publications, intervention design

  • Tools: TextMagic, Qualtrics

To get people to exercise more, we need to change their attitudes

Exercise is great for your health, but it can be hard for people to get moving. There are lots of interventions out there to help you set goals and track progress, but those apps and devices get abandoned all the time. What’s an intervention designer to do?

I wanted to see if we can change people’s attitudes towards exercise — essentially, to help people like it more so that they do it more often, with or without a Fitbit. Attitudes are a key part of behavioral science models and are important predictors of exercise behavior.

But attitudes are hard to change. That’s partly because vigorous exercise is hard—not everyone enjoys feeling gross and sweaty and out of breath. But what about afterwards, when you’re done? Many people feel better after they exercise, but so far exercise interventions haven’t harnessed that affective (mood) boost. My job was to design and evaluate an intervention to improve attitudes towards exercise.

Image from https://images.app.goo.gl/LtKfzo65SrdxCBCr6

Can the right text message at the right time change how you feel about exercise?

Maybe! Ask me about preliminary results!

The process

  • Concept validation: I conducted a survey as concept validation for an attitudinal intervention based on affective rewards—the good feelings you get after you exercise. The survey suggested that an intervention based on affective rewards might work, but it needed to be highly customizable to individuals. (Publication: “Move into another world of happy,” Pervasive Health 2017).

  • Intervention design: I needed an intervention that was lightweight and scalable while at the same time being highly personalizable.

    • For zero development load and easy scalability, I decided on text messages: a simple and readily accessible format that could reach people at different times.

    • To maximally personalize the intervention, I had participants write the text messages themselves. (I piloted the format of the prompts iteratively on MTurk to make sure participants could write good messages.)

    • To further explore the design space, I included reflective prompts that participants had to respond to.

  • Experimental evaluation: I conducted a micro-randomized trial to test the intervention. (The protocol has been registered as a clinical trial and published in JMIR Research Protocols.)

    • Piloting: I piloted it first, twice actually, to work out the kinks—were the texts being delivered on time? Did the surveys that were sent during the deployment make sense? Etc.

    • Trial design: I opted for a micro-randomized trial. This within-subjects design yields more statistical power with fewer participants and supports robust inferences about causality.

    • Participants: I opened it to the general US adult population (anyone 18+ living in the US) because the process I’m using to change attitudes — affective rewards — isn’t restricted to any particular subpopulation.

      • I restricted my participants to people with an Apple Watch, rather than a Fitbit. That was because the Apple Watch has a bigger market share, and because comparing data across device types is more complicated than restricting to a single device. But yes, it does introduce a bias.

    • Outcomes: I administered pre- and post-measures of attitudes towards exercise. I also collected data from participants’ Apple Watches, specifically calories burned and step count. I included calories burned because I wanted to let people do whatever exercise brought them the most joy, including activities like aerobics or dance that might burn more calories than the step count suggests.

  • Experience evaluation: I care about the experience of the intervention as well as the effect. I am evaluating experience through surveys and interviews.

    • Closing surveys: I’m using closing surveys to get data on what kinds of text messages users prefer (the ones they wrote or the reflective prompts), whether the frequency of texts is right, and some other topics.

    • Exit interviews: I’m also doing exit interviews to get rich data on how the intervention impacted people’s lives.

  • Analysis: Analysis is in progress, including both quantitative and qualitative data. Reporting will follow, but I’ve already been discussing preliminary results and themes from the qualitative data with my stakeholders. To rapidly analyze the qualitative interview data, I’ve been making tables to capture key similarities and differences between participants.

    • Unexpected challenge: There is a lot of missing data. During the study I was monitoring to make sure people were sharing data, but even with very proactive monitoring and outreach there is a high missingness rate that I’ll be dealing with in the analysis.
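At its core, the micro-randomization in a trial like this is an independent coin flip for each participant at each decision point (e.g., each day): send the message or withhold it. The sketch below illustrates the idea only; the function name, probability, and schedule are assumptions, not the study's actual implementation:

```python
import random

def micro_randomize(participants, decision_points, p_send=0.5, seed=0):
    """For each participant, independently randomize each decision point
    to 'send a message' (True) or 'withhold it' (False)."""
    rng = random.Random(seed)  # seeded for a reproducible schedule
    return {
        pid: [rng.random() < p_send for _ in range(decision_points)]
        for pid in participants
    }

# Each participant gets 7 independent send/withhold assignments, so each
# person serves as their own control and treatment effects can be
# estimated within subjects.
schedule = micro_randomize(["p01", "p02"], decision_points=7)
```

Because every participant contributes many randomized decision points rather than a single arm assignment, the design gets more statistical power out of a smaller sample.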

Key Findings

Stay tuned for these to be posted here, but I can tell you about preliminary results verbally!

What will the impact be?

  • The findings from this study will tell us whether we can change attitudes towards exercise by reminding people of the affective rewards of exercise at the right time.

  • The findings from this study will also tell us something about how we should design such an intervention — what’s the right kind of message and what’s the right dose/frequency with which to send them?

Acknowledgments

This project could not have happened without my esteemed colleague, Pedja Klasnja. Thank you so much for your help and support in this project!