
Passive data collection via mobile phone: Effortless Assessment of Risk States (E.A.R.S.) tool

The EASE (Effortless Assessment of Stressful Environments) Study: Assessing stress with mobile passive sensing behavioral data

Michelle L. Byrne, Nicholas B. Allen, Monika N. Lind

The University of Oregon, Eugene, OR


Introduction

Passive mobile phone sensing, which collects data on an ongoing basis, may make it possible to examine the effects of stressors as they unfold. It also has the advantage of measuring objective, naturalistic behavior, which could improve upon self-report questionnaires that may suffer from response or interpretation bias. New technology can measure behaviors known to be associated with stress, including language use, acoustic voice characteristics, and emotional facial expressions. This may be especially valuable in the academic examination paradigm, a within-subjects design that compares variables of interest during a low-stress period to the same variables measured during final exams. Reliable effects of the examination period have been reported for self-reported perceived stress, mental health symptoms, and immune markers. However, studies using this paradigm typically measure variables only once or twice and compare them to a single baseline measurement, which is unlikely to capture both the acute stressors (immediately before or during each exam) and the prolonged stressors (study time leading up to the exams) of the final examination period. Passive sensing may be one way to overcome this methodological challenge. Furthermore, many of these studies do not take cumulative lifetime stress into account, even though the number of stressful life events and chronic difficulties are reliable predictors of mental health outcomes.


Research Question and/or Specific Aims

Aim 1: To validate objective behavioral indicators from passive mobile sensing in terms of their relationships to known correlates of stress, such as perceived stress, mental health symptoms, and biological markers.

Aim 2: To understand how these behaviors change within individuals using an examination stress paradigm.


Hypotheses

Hypothesis 1: Passive mobile sensing behaviors will correlate with known measures of stress. Specifically, negative words and stress-related acoustic voice properties will be positively associated, and positive words, number of smiles, and amount of physical activity will be negatively associated, with self-reported stress and mental health symptoms, lifetime stress, and elevated inflammation.

Hypothesis 2: Passive mobile sensing behaviors will change within individuals between a baseline week and a high-stress (final examination) week.


Methods

We assessed 25 undergraduate students aged 18 years or older (mean age = 18.35, S.D. = 2.75); 24 of these completed both the baseline and follow-up weeks. We restricted participation to individuals who already used an Android mobile phone with a forward-facing camera and who did not report having an immunological disorder or regularly taking medication to treat one. We collected two week-long sets of passive sensing data. The baseline assessment occurred 3-7 weeks before each student’s first final exam, and the follow-up assessment occurred during the week before the student’s last final exam.


The Effortless Assessment of Risk States (E.A.R.S.) tool is a suite of programs installed on the user’s mobile phone that collects naturalistic behavior. In this version of E.A.R.S.:

1) A keyboard logger collected naturalistic, passive text input across all applications. We buffered the logs so that only every 3rd word was recorded; as a result, we collected only aggregate word counts (not semantics), protecting participant privacy. We assigned sentiment (positive, negative, neutral) to each word using the “bing” lexicon in R and calculated the total number of words, total number of positive words, and total number of negative words (a minimal sketch of this word counting appears after this list).

2) A video diary prompt appeared every evening during each of the two assessment weeks. It instructed participants to state their name, the date, the time, and the weather, and to describe one positive and one negative thing that happened that day. From these recordings we extracted acoustic voice and facial expression data, although only the facial expression data have been analyzed to date. We calculated smile intensity (Action Unit (AU) 12) using the OpenFace software.

3) Minutes of physical activity in four categories (active, walking, driving, and still) were obtained from the Google Fit API for a subset of participants (N = 13).
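To make the word-counting step concrete, the following R sketch counts positive and negative words with the “bing” lexicon from the tidytext package. The input file name and one-word-per-line format are assumptions for illustration; this is not the exact EARS pipeline.

# Minimal sketch of bing-lexicon word counting (assumed input: a plain-text
# file of buffered keyboard words, one word per line).
library(dplyr)
library(tidytext)

words <- tibble(word = readLines("participant_words.txt"))  # hypothetical file name
bing  <- get_sentiments("bing")                             # positive/negative word lexicon

counts <- words %>%
  left_join(bing, by = "word") %>%
  summarise(
    total_words    = n(),
    positive_words = sum(sentiment == "positive", na.rm = TRUE),
    negative_words = sum(sentiment == "negative", na.rm = TRUE)
  )

counts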


Participants completed in-lab computerized questionnaires of perceived stress and mental health symptoms on the last day of each assessment week. These questionnaires included the Subjective Stress in Context (SSIC), the Depression Anxiety Stress Scales (DASS), the Perceived Stress Scale (PSS), the Pittsburgh Sleep Quality Index (PSQI), the Physical Health Questionnaire (PHQ), and the International Physical Activity Questionnaire (IPAQ). We also assessed exposure to cumulative life stress using the Stress and Adversity Inventory (STRAIN), a secure online stress assessment system that measures individuals’ lifetime exposure to different types of acute and chronic stressors that can affect mental and physical health. Participants completed this measure only once, on the last day of the baseline assessment week. Currently, only results for the DASS and the average number of hours slept from the PSQI are available for analysis.


We collected saliva samples to test for levels of C-reactive protein (CRP), a general pro-inflammatory marker, and secretory immunoglobulin A (SIgA), a measure of immune activation or competence. Participants contributed approximately 2 mL of saliva twice, once at the end of each assessment week. Samples were collected at the end of each visit to ensure that participants had not eaten, drunk, smoked, or chewed gum for the 20-30 minutes before giving the sample. Samples were stored in a -20°C freezer for less than 48 hours before being transferred directly to a -80°C freezer. Approximately 2-7 months later, samples were shipped on dry ice to Iowa State University, where they are currently stored and will be assayed for CRP and SIgA with enzyme-linked immunosorbent assays (ELISAs) over the next couple of months.

Finally, participants reported date of birth, biological sex, gender identity, race/ethnicity, and parents’ education and income at Visit 1. We also measured height and weight.


Results

Twelve (48%) of the 25 participants identified their biological sex and gender identity as female, and 13 (52%) identified as male. Twelve percent were Asian, 64% Caucasian, 12% Hispanic, and 12% Multiracial. The mean yearly pre-tax income of participants’ parents was $88,625.00 ± $62,009.69, and mean participant BMI was 25.82 ± 4.72 kg/m².


Of the 24 participants who completed both baseline and follow-up, 22 are currently available for text analysis (the other two will be available soon). At baseline, the total number of positive words was positively correlated with scores on the DASS Stress subscale (r = 0.459, p = 0.032) and negatively correlated with PSQI hours slept (r = -0.478, p = 0.024), contrary to expectations. However, the total number of negative words was also negatively correlated with PSQI hours slept (r = -0.583, p = 0.004), suggesting that total affective words, or their ratio to overall word count, may matter more than valence; we will examine this next. Total words and ratios did not change significantly between weeks.
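For illustration, correlations of this kind can be computed in R with cor.test(), as in the sketch below. The data frame, column names, and values are hypothetical and are not the study data.

# Hypothetical per-participant baseline summaries (illustrative values only).
baseline <- data.frame(
  positive_words   = c(12, 30, 7, 22, 15, 9),
  negative_words   = c(5, 14, 3, 9, 6, 4),
  dass_stress      = c(8, 18, 4, 14, 10, 6),
  psqi_hours_slept = c(7.5, 6.0, 8.0, 6.5, 7.0, 7.8)
)

# Pearson correlations (r and p) of the kind reported above.
cor.test(baseline$positive_words, baseline$dass_stress)
cor.test(baseline$positive_words, baseline$psqi_hours_slept)
cor.test(baseline$negative_words, baseline$psqi_hours_slept)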


So far, we have only analyzed AU12 (smiles), collapsed across the baseline and follow-up weeks (Hypothesis 1). Average smile intensity in the video diaries was positively associated with the DASS Anxiety (F(1,46) = 8.9018, p = 0.005) and Stress (F(1,44) = 7.511, p = 0.009) subscales, again contrary to expectations. There was no significant association with the Depression subscale.
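A smile-to-symptom association like this could be tested with a simple linear model in R, as sketched below; the data frame, column names, and values are hypothetical.

# Hypothetical diary-level data (values are illustrative only).
d <- data.frame(
  au12_intensity = c(0.4, 1.2, 0.8, 2.1, 1.6, 0.3, 1.9, 0.7),
  dass_anxiety   = c(2, 6, 4, 10, 8, 1, 9, 3)
)

# Linear model of DASS Anxiety on mean smile intensity; anova() reports the
# F statistic and p-value for the predictor.
fit <- lm(dass_anxiety ~ au12_intensity, data = d)
anova(fit)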

Time spent active, walking, and driving, assessed from the Google Fit (phone) data, was not associated with DASS symptoms at baseline or follow-up; interestingly, however, “still” time was negatively associated with the Anxiety subscale during the baseline week (r = -0.592, p = 0.033). No physical activity behaviors changed from baseline to follow-up except driving, which increased (t = 2.348, p = 0.037).
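The within-person change in driving time could be tested with a paired t-test, as in this hypothetical R sketch (vectors and values are illustrative only, not the study data):

# Hypothetical mean daily driving minutes per participant, per week.
driving_baseline <- c(10, 25, 5, 40, 15, 20, 8, 33)
driving_followup <- c(18, 30, 12, 55, 20, 28, 10, 41)

# Paired t-test for change from the baseline week to the follow-up week.
t.test(driving_followup, driving_baseline, paired = TRUE)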


Conclusion and Future Directions

The development of the EARS tool and the pilot data from this study have shown that passive sensing via mobile phones is a feasible way to collect naturalistic, observed behaviors that appear to be significant markers of stress and mental health. The finding that smile intensity was positively associated with stress and anxiety was contrary to expectations and may reflect the less passive nature of the video diary; social or performance anxiety may have been a factor. Planned analyses of this dataset include other facial action units that may be associated with stress and mental health symptoms, such as those involved in expressions of disgust, as well as acoustic voice properties known to be associated with depression.

Some results from the text data were also contrary to expectations: stress and less sleep were associated with more positive words, although less sleep was also associated with more negative words. This may reflect general phone usage rather than affective language use specifically, and future analyses will also examine ratios of affective words to total words. Other classifiers or machine learning techniques for natural language processing may reveal more subtle associations, but in the current study we are limited by having only every 3rd word. Future versions of EARS will be able to record every word and will also collect forward-facing camera photos to analyze more passive (i.e., unprompted) facial expressions.

“Still” time assessed by Google Fit was negatively associated with anxiety symptoms, suggesting that, in this paradigm, sedentary behavior may not be a marker of poor mental health; however, these analyses were limited to a very small number of participants. Finally, we have yet to test associations between passive sensing behaviors and other known correlates of stress, such as self-reported perceived stress from the PSS or SSIC, lifetime cumulative stress from the STRAIN, immune functioning, and self-reported physical health.


How can other researchers use these findings to inform their own work exploring the concept of ‘stress’?

We suggest that other researchers interested in measuring naturalistic stress behavior use the EARS tool in their own studies, especially because it generates a large amount of what we call “individual big data”. This pilot project provided proof of concept, showing that passive behaviors are associated with known correlates of stress even in a small sample; larger studies may therefore be able to explore more complex associations.


How will this experience alter the way in which you approach studying and measuring ‘stress’?

This study and the development of the EARS tool were a catalyst for the establishment of the new Center for Digital Mental Health at the University of Oregon (www.c4dmh.net). They also allowed us to take part in the Obama White House’s Opportunity Project in August 2016, a technology development “sprint” to create new open digital tools to help communities (tinyurl.com/y75pla5m). We plan to use these pilot data in a future grant application for an intervention study that alerts clinicians to changes in the passive mobile data of individuals at risk for mental health problems. Overall, we believe that this pilot study has justified the use of the tool in our future research studies and interventions, where observed measures of stress can support better prediction and prevention of mental health crises.


Submitted July 15, 2017
