Priming Affective Rewards to Encourage Exercise

Project Overview

  • Role: user researcher, designer

  • Methods: surveys, experiment (micro-randomized trial), interviews

  • Deliverables: reports and publications, intervention design

  • Tools: TextMagic, Qualtrics

To get people to exercise more, we need to change their attitudes

Exercise is great for your health, but it can be hard for people to get moving. There are lots of interventions out there to help you set goals and track progress, but those tools get abandoned all the time. What’s an intervention designer to do?

I wanted to see if we can change people’s attitudes towards exercise — essentially, to help people like it more so that they do it more often, with or without a Fitbit. Attitudes are a key part of behavioral science models and are important predictors of exercise behavior.

But attitudes are hard to change. That’s partly because vigorous exercise is hard—not everyone enjoys feeling gross and sweaty and out of breath. But what about afterwards, when you’re done? Many people feel better after they exercise, but so far exercise interventions haven’t harnessed that affective (mood) boost. My job was to design and evaluate an intervention to improve attitudes towards exercise.

Can the right text message at the right time change how you feel about exercise?

Maybe! Ask me about preliminary results!

The process

  • Concept validation: I conducted a survey as concept validation for an attitudinal intervention based on affective rewards—the good feelings you get after you exercise. The survey suggested that an intervention based on affective rewards might work, but it needed to be highly customizable to individuals. (Publication: “Move into another world of happy,” Pervasive Health 2017).

  • Intervention design: I needed an intervention that was lightweight and scalable, yet highly personalizable.

    • For zero development load and easy scalability, I decided on text messages: a simple and readily accessible format that could reach people at different times.

    • To maximally personalize the intervention, I had participants write the text messages themselves. (I piloted the format of the prompts iteratively on MTurk to make sure participants could write good messages.)

    • To further explore the design space, I also included reflective prompts that participants had to respond to.

  • Experimental evaluation: I conducted a micro-randomized trial to test the intervention. (The protocol has been registered as a clinical trial and published in JMIR Research Protocols.)

    • Piloting: I piloted it first, twice actually, to work out the kinks—were the texts being delivered on time? Did the surveys that were sent during the deployment make sense? Etc.

    • Trial design: I opted for a micro-randomized trial. This is a within-subjects trial design in which each participant is randomized repeatedly over time, which lets you get more power with fewer participants and draw robust inferences about causality.

    • Participants: I opened it to the general US adult population (anyone 18+ living in the US) because the process I’m using to change attitudes — affective rewards — isn’t restricted to any particular subpopulation.

      • I restricted my participants to people with an Apple Watch, rather than a Fitbit. That was because Apple Watch has a bigger market share, and because comparing data from different devices is more complicated than restricting to a single device type. But yes, it does introduce a bias.

    • Outcomes: I administered pre- and post-intervention measures of attitudes towards exercise. I also collected data from participants’ Apple Watches, specifically calories burned and step count. I included calories burned because I wanted to allow people to do whatever exercise they got the most joy out of, including things like aerobics or dance that might burn more calories than the step count suggests.

  • Experience evaluation: I care about the experience of the intervention as well as the effect. I am evaluating experience through surveys and interviews.

    • Closing surveys: I’m using closing surveys to get data on what kinds of text messages users prefer (the ones they wrote or the reflective prompts), whether the frequency of texts is right, and some other topics.

    • Exit interviews: I’m also doing exit interviews to get rich data on how the intervention impacted people’s lives.

  • Analysis: Analysis is in progress, including both quantitative and qualitative data. Reporting will follow, but I’ve already been discussing preliminary results and themes from the qualitative data with my stakeholders. To rapidly analyze the qualitative interview data, I’ve been making tables to capture key similarities and differences between participants.

    • Unexpected challenge: There is a lot of missing data. During the study I was monitoring to make sure people were sharing data, but even with very proactive monitoring and outreach there is a high missingness rate that I’ll be dealing with in the analysis.
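
To make the micro-randomized trial design concrete, here is a minimal Python sketch of the core idea (the function name, parameters, and probability are illustrative assumptions, not the study’s actual randomization code): at each decision point, each participant is independently randomized to receive a message or not, so every person contributes many within-subject comparisons.

```python
import random

def mrt_schedule(participant_ids, days, decision_points_per_day, p_send=0.5, seed=0):
    """Sketch of micro-randomization: at every decision point, independently
    flip a (biased) coin to decide whether to send a text message.
    Returns a list of (participant, day, decision_point, send) tuples."""
    rng = random.Random(seed)  # seeded for a reproducible schedule
    schedule = []
    for pid in participant_ids:
        for day in range(days):
            for point in range(decision_points_per_day):
                send = rng.random() < p_send
                schedule.append((pid, day, point, send))
    return schedule

# Two participants, one week, two decision points per day:
schedule = mrt_schedule(["p01", "p02"], days=7, decision_points_per_day=2)
# Each participant contributes 14 randomized decision points, so the
# message-vs-no-message contrast is estimated within person.
```

The repeated within-person randomization is what gives the design its power advantage over a classic two-arm trial of the same size.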
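
As a simple illustration of how missingness like this can be quantified, here is a hedged Python sketch (the data shape and helper are hypothetical, not the study’s actual analysis pipeline): per participant, compute the fraction of expected study days with no wearable data.

```python
def missingness_rate(daily_records, expected_days):
    """Fraction of expected study days with no wearable data.
    daily_records: dict mapping participant id -> set of day indices with data."""
    return {
        pid: 1 - len(days_with_data) / expected_days
        for pid, days_with_data in daily_records.items()
    }

# Hypothetical example: over a 10-day window, p01 shared data on 5 days
# and p02 on only 2 days.
records = {"p01": {0, 1, 2, 3, 4}, "p02": {0, 3}}
rates = missingness_rate(records, expected_days=10)
# p01 is missing 50% of days; p02 is missing 80%.
```

A per-participant rate like this makes it easy to flag people for outreach during the study and to decide, at analysis time, how to handle heavily incomplete records.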

Key Findings

Stay tuned for these to be posted here, but I can tell you about preliminary results verbally!

What will the impact be?

  • The findings from this study will tell us whether we can change attitudes towards exercise by reminding people of the affective rewards of exercise at the right time.

  • The findings from this study will also tell us something about how we should design such an intervention — what’s the right kind of message, and what’s the right dose/frequency for sending it?

Acknowledgments

This project could not have happened without my esteemed colleague, Pedja Klasnja. Thank you so much for your help and support in this project!