
Speak Up! Theory-Based Prototypes for Hospitalized Patients

Project Overview

  • Role: user researcher, strategist, designer

  • Methods: survey, interviews, prototyping

  • Deliverables: internal report, public presentations

  • Tools: Sketch

  • Sponsor: AHRQ (Agency for Healthcare Research and Quality)

How can we motivate hospitalized patients to speak up about their concerns?

Patients in the hospital don’t always speak up about their concerns, even when staying silent puts their health at risk. Getting them to speak up can prevent medical errors and improve patient safety.

Most of the interventions in this space are directed at clinicians: getting clinicians to speak up to other clinicians about protocol violations. But there isn’t much knowledge about how to help patients speak up to clinicians. My job was to figure out what kinds of design strategies we should pursue.

I decided to approach this as a behavior change problem, and ground my approach in both scientific literature and user studies.

Research Goals

I needed to understand:

  • What kinds of interventions does behavioral science tell us to make?

  • What kinds of design approaches can embody that behavioral science, and how do we know they accurately represent it (“theoretical fidelity”)?

  • Which of these approaches are most likely to work with patients?

The process

  • Designing theory-based prototypes: First I turned to behavioral science literature, and found the Integrated Behavioral Model. This model names several constructs that contribute to behavior change — things like how confident someone feels in their ability to speak up (self-efficacy), whether they feel they can get opportunities to speak up (perceived control), whether they think other patients speak up (descriptive norms), and others. I designed several prototypes that illustrated different design strategies for each motivational construct in the model.

  • Validating the prototypes: To validate the theoretical fidelity of the prototypes, I conducted a survey asking experts to map the prototypes to the right construct.

    • Unexpected challenge: validation was harder than I expected. I conducted additional interviews with experts to confirm that my validation method was working as intended, and ran two rounds of prototyping and survey validation.
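At its core, the expert survey boils down to an agreement calculation: for each prototype, what fraction of experts mapped it to the construct it was designed for? Here is a minimal sketch in Python — the prototype names, expert responses, and 70% threshold are hypothetical illustrations, not values from the study:

```python
from collections import Counter

def fidelity_scores(responses, intended, threshold=0.7):
    """For each prototype, compute the fraction of experts who mapped it
    to its intended construct, and flag whether it clears the threshold.
    `responses` maps prototype -> list of constructs chosen by experts;
    `intended` maps prototype -> the construct it was designed to embody."""
    results = {}
    for proto, picks in responses.items():
        counts = Counter(picks)
        agreement = counts[intended[proto]] / len(picks)
        results[proto] = (round(agreement, 2), agreement >= threshold)
    return results

# Hypothetical data: five experts each map two prototypes to a construct.
responses = {
    "peer_story_wall": ["descriptive norms"] * 4 + ["self-efficacy"],
    "speak_up_tracker": ["descriptive norms"] * 3 + ["perceived control"] * 2,
}
intended = {
    "peer_story_wall": "descriptive norms",
    "speak_up_tracker": "descriptive norms",
}

print(fidelity_scores(responses, intended))
# A prototype below threshold would go back for another round of redesign.
```

Prototypes that fall below the agreement threshold are the ones that warrant another design-and-validate iteration, which is exactly the loop described above.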

Example low-fidelity prototypes used in the interviews. One is an automated tracker of how often other patients in the hospital speak up, the other is a system where patients can anonymously share their stories of speaking up.

Exploring multiple design strategies for each motivational construct yields more insights

These cards show two very different strategies for the same motivational construct of “descriptive norms”. Both illustrate a way to normalize speaking up for patients. But boy did they provoke different reactions!

  • Prototype studies with patients and caregivers in the hospital: I took the prototypes with me and interviewed patients and caregivers while they were in the hospital about their experiences and their interpretations of the validated prototypes. Patients are diverse, and so was my participant pool: I recruited from adult and pediatric hospitals, including patients with a wide variety of conditions, as well as family caregivers helping the patient. In total I interviewed 28 participants.

  • Analysis and reporting: I discussed emergent themes with stakeholders as I went, modifying the interview protocol slightly to push on important themes. I conducted a rigorous thematic analysis in ATLAS.ti and reported my findings in internal presentations and reports, as well as in a presentation at CHI 2020, a premier international conference.

Key Findings

  • There are MANY unexplored opportunities to encourage patients to speak up about their care. All of my prototypes were both novel and well-received by patients.

  • Normalizing speaking up and helping patients feel more confident in doing so can both be achieved successfully by in-hospital peer support systems — in other words, tools for patients to share stories and tips with each other. (Don’t worry, other work that I collaborated on explores how to do this well, without creating misinformation or antagonism towards clinicians). This approach may be more effective than the other strategies I explored.

  • Creating shared agendas can help patients create opportunities to speak up, but patients need to be confident that their interruptions and additions are welcome to clinicians. Otherwise, patients worry about being rude.

  • Theoretical fidelity is a lot harder to achieve than you think.

What was my impact?

  • I identified new opportunities for encouraging patients to speak up to promote patient safety.

  • I identified top-performing intervention strategies that would be highest priority to develop and test.

  • I identified issues with establishing theoretical fidelity of interventions — a complex problem for science!

Acknowledgments

This work could not have gone forward without support from our funders (thank you AHRQ!) and without the collaboration of my excellent colleagues: Wanda Pratt, Shefali Haldar, Ari Pollack, and other members of the Patients as Safeguards research team. Thank you for your support!

Priming Affective Rewards to Encourage Exercise

Project Overview

  • Role: user researcher, designer

  • Methods: surveys, experiment (micro-randomized trial), interviews

  • Deliverables: reports and publications, intervention design

  • Tools: TextMagic, Qualtrics

To get people to exercise more, we need to change their attitudes

Exercise is great for your health, but it can be hard for people to get moving. There are lots of interventions out there to help you set goals and track progress, but those apps and devices get abandoned all the time. What’s an intervention designer to do?

I wanted to see if we can change people’s attitudes towards exercise — essentially, to help people like it more so that they do it more often, with or without a Fitbit. Attitudes are a key part of behavioral science models and are important predictors of exercise behavior.

But attitudes are hard to change. That’s partly because vigorous exercise is hard—not everyone enjoys feeling gross and sweaty and out of breath. But what about afterwards, when you’re done? Many people feel better after they exercise, but so far exercise interventions haven’t harnessed that affective (mood) boost. My job was to design and evaluate an intervention to improve attitudes towards exercise.

Image from https://images.app.goo.gl/LtKfzo65SrdxCBCr6

Can the right text message at the right time change how you feel about exercise?

Maybe! Ask me about preliminary results!

The process

  • Concept validation: I conducted a survey as concept validation for an attitudinal intervention based on affective rewards—the good feelings you get after you exercise. The survey suggested that an intervention based on affective rewards might work, but it needed to be highly customizable to individuals. (Publication: “Move into another world of happy,” Pervasive Health 2017).

  • Intervention design: I needed an intervention that was lightweight and scalable while at the same time being highly personalizable.

    • For a zero development load and scalability, I decided on text messages: a simple and readily accessible format that could reach people at different times.

    • To maximally personalize the intervention, I had participants write the text messages themselves. (I piloted the format of the prompts iteratively on MTurk to make sure participants could write good messages.)

    • To further explore the design space, I included reflective prompts that participants must respond to.

  • Experimental evaluation: I conducted a micro-randomized trial to test the intervention. (The protocol has been registered as a clinical trial and published in JMIR Research Protocols.)

    • Piloting: I piloted it first, twice actually, to work out the kinks—were the texts being delivered on time? Did the surveys that were sent during the deployment make sense? Etc.

    • Trial design: I opted for a micro-randomized trial. This is a within-subjects design in which each participant is randomized at many decision points, which yields more statistical power with fewer participants and supports robust inferences about causality.

    • Participants: I opened it to the general US adult population (anyone 18+ living in the US) because the process I’m using to change attitudes — affective rewards — isn’t restricted to any particular subpopulation.

      • I restricted my participants to people with an Apple Watch, rather than a Fitbit. That was because Apple Watch has a bigger market share, and because comparing data from different devices is more complicated than restricting to a single device type. But yes, it does introduce a bias.

    • Outcomes: I administered pre- and post- measures about attitudes towards exercise. I also collected data from participants’ Apple Watches, specifically calories burned and step count. I included calories burned because I wanted to allow people to do whatever exercise they got the most joy out of, including things like aerobics or dance that might burn more calories than the step count suggests.

  • Experience evaluation: I care about the experience of the intervention as well as the effect. I am evaluating experience through surveys and interviews.

    • Closing surveys: I’m using closing surveys to get data on what kinds of text messages users prefer (the ones they wrote or the reflective prompts), whether the frequency of texts is right, and some other topics.

    • Exit interviews: I’m also doing exit interviews to get rich data on how the intervention impacted people’s lives.

  • Analysis: Analysis is in progress, including both quantitative and qualitative data. Reporting will follow, but I’ve already been discussing preliminary results and themes from the qualitative data with my stakeholders. To rapidly analyze the qualitative interview data, I’ve been making tables to capture key similarities and differences between participants.

    • Unexpected challenge: There is a lot of missing data. During the study I was monitoring to make sure people were sharing data, but even with very proactive monitoring and outreach there is a high missingness rate that I’ll be dealing with in the analysis.
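The micro-randomization described above comes down to flipping a (possibly weighted) coin for each participant at every decision point, independently, so each person accumulates both treated and untreated moments for within-person comparisons. A minimal sketch — the three-decision-points-per-day schedule and the 0.5 send probability are hypothetical placeholders, not the study’s actual settings:

```python
import random

def randomize_day(participant_id, day, decision_points=3, p_send=0.5, seed=None):
    """Independently randomize each decision point in one participant-day.
    Returns a list of (decision_point, treatment) pairs, where treatment
    is True if a message should be sent at that point.
    `participant_id` and `day` are kept only for bookkeeping in this sketch."""
    rng = random.Random(seed)
    return [(dp, rng.random() < p_send) for dp in range(decision_points)]

# Over many participant-days, each person contributes both message and
# no-message decision points, enabling causal estimates of momentary effects.
schedule = randomize_day("p01", day=1, seed=42)
print(schedule)
```

Seeding the generator per participant-day (as in the example) makes the randomization reproducible for auditing, while keeping decision points statistically independent of one another.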

Key Findings

Stay tuned for these to be posted here, but I can tell you about preliminary results verbally!

What will the impact be?

  • The findings from this study will tell us whether we can change attitudes towards exercise by reminding people of the affective rewards of exercise at the right time.

  • The findings from this study will also tell us something about how we should design such an intervention — what’s the right kind of message and what’s the right dose/frequency with which to send them?

Acknowledgments

This project could not have happened without my esteemed colleague, Pedja Klasnja. Thank you so much for your help and support in this project!