By Dan Powell
Ever since beginning my research career as an MSc Health Psychology student in 2008, I’ve been interested in and excited by studying daily life using the Ecological Momentary Assessment (EMA) method. I pursued this interest throughout my PhD and now work with Aberdeen Health Psychology Group on a study of unhealthy snacking and sedentary behaviours in daily life. At the recent European Health Psychology Society (EHPS) conference in Cyprus, I was fortunate enough to convene a symposium on EMA methods and was struck by a continued and – I would suggest – growing interest in utilising the potential of EMA for Health Psychology research.
Now, I’ve heard on the grapevine that the NUIG Health Psychology Blog really does enjoy a good ‘5 Tips on…’ post. Previous excellent examples are here and here. Not wanting to disappoint, I’ve gone with the same approach for this post on EMA which I hope will be useful for newbies to these kinds of studies.
What is EMA?
Let’s start with terminology. Hopefully you will have heard of EMA. You may also have heard of Ambulatory Assessment or Experience Sampling and wondered about the difference between each. The key differences between these methods can be explained in one word: none. Each originated from a different research discipline, but has evolved over time to describe essentially the same thing. For simplicity’s sake I will be using the term EMA in this blog, and use EMA to describe the range of real-time assessment methods that repeatedly capture behaviour, symptoms, psychological processes, and/or physiological measures in everyday life.
Why do I need to know about EMA?
So let’s deal with the “Why Bother?” question. First, your data will have enhanced ecological validity: the extent to which your findings will generalise to the real world. You’re directly observing your predictors and outcomes in daily life, so you can give yourself a thumbs up there! Second, the method will minimise recall bias. The self-report ratings provided will be relatively close in time and context to the experience itself, so Hi-5 there too! These two important advantages are not to be sniffed at but, in my opinion, it is the opportunity to make the move into testing processes within individuals where the greatest potential exists. Between-individual data and analysis tend to look at the “Who?” research questions. EMA can explore research questions about Where, When, and Why phenomena – whether that be behaviour, symptoms, or anything really – vary and co-vary within individuals. An editorial in the BJHP by colleagues in Aberdeen stressed the need to evaluate the applicability of behavioural theory to within-individual change (Johnston & Johnston, 2013). Here, the authors caution against assuming that theories explaining differences between individuals will also explain change within individuals (see also Molenaar, 2004). This error of logic they describe is a kind of Ecological Fallacy. By analysing EMA data with multilevel modelling (more details later), you can focus on within-person relationships by essentially allowing each individual to act as their own control. This has the potential to contribute towards a better understanding of behaviour change applied to individuals.
5 Tips on carrying out an EMA study
Given fast-improving technological capabilities, I will assume you intend to use an electronic device to prompt and capture real-time data. If not, please consider this as a strong suggestion and a bonus tip! You will automatically get time-stamps for your data, have expanded design options (see Tip 4 below), and eliminate the need for extremely laborious data inputs from paper-pencil diaries.
TIP 1: Learn multilevel modelling (MLM)
Say goodbye to the usual one-row-per-participant dataset. An EMA dataset has an inescapable multilevel structure, with multiple real-time assessments “nested” within participants. This means multiple rows per participant and time to turn to MLM. Begin learning MLM now. Don’t wait until nearer the end of data collection as it takes a while to get the hang of it. The good news is that MLM reference books have become more and more accessible for the non-expert in recent years (see here or here as examples) and analysis can often still be performed in SPSS so it’s not necessary to learn new software (see here and here).
MLM will allow you to test the within-individual processes that would be lost by creating an average for each person (i.e., manipulating the data back to one-row-per-participant) ahead of more traditional analysis. Aggregating would simply be squandering the richness of your dataset – don’t do it! Detailing all the advantages of MLM for EMA studies would necessitate another post, but you may be persuaded simply by your new-found ability to hysterically “…laugh in the face of missing data” (Field, 2012, pg. 729). Disclaimer: There are still certain types of missing data that can’t be laughed at and ignored (see Black et al., 2012 or here) and you should always aim to limit missing data (see Tip 3).
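To make the long-format idea concrete, here is a minimal sketch (in Python, with entirely made-up toy data – participant IDs, a stress rating, and a snacking indicator are my hypothetical examples, not variables from any real study) of restructuring EMA records into one row per assessment and person-mean centring a predictor. This centring step is what lets MLM separate within-person effects from between-person differences, i.e., it lets each individual act as their own control:

```python
from statistics import mean

# Toy EMA records: (participant_id, stress_rating, snacked 0/1).
# One tuple per momentary assessment - the "nested" structure.
records = [
    ("p1", 2, 0), ("p1", 4, 1), ("p1", 6, 1),
    ("p2", 6, 0), ("p2", 7, 0), ("p2", 8, 1),
]

# Person means capture the between-person component of the predictor.
by_person = {}
for pid, stress, _ in records:
    by_person.setdefault(pid, []).append(stress)
person_means = {pid: mean(vals) for pid, vals in by_person.items()}

# Long format: one row per assessment, with the predictor split into
# a within-person part (deviation from own mean) and a between-person
# part (the person's mean itself).
long_data = [
    {"id": pid,
     "stress_within": stress - person_means[pid],
     "stress_between": person_means[pid],
     "snacked": snacked}
    for pid, stress, snacked in records
]

for row in long_data:
    print(row)
```

A dataset structured like this can then be passed to MLM routines (e.g., SPSS MIXED, or mixed-model functions in R or Python), with the within-person term answering the “does this person snack more than usual when more stressed than usual?” question.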
TIP 2: Consider the burden on participants when finalising the design
Do you want to study daily life or some kind of strange parallel life where a pesky gadget is incessantly seeking attention every 20 minutes? You need to thoroughly consider the burden of your study’s design on participants. For your sake as much as for theirs! Contributing factors will include the number of assessment days, the frequency of assessments (number per day), the length of time taken to respond to each prompt (i.e., length of self-report scales, ease of the response formats, plus any other actions required), and more aesthetic issues such as how easy it is to respond and how pleasant the audible prompt (alarm) is. Bear in mind that statistical power in MLM is not solely dependent on the number of individuals, but is reliant on the number of assessments as well as other factors (see Bolger et al., 2012 or here) so make sure you still have sufficient data to answer your research questions. Short valid scales should be preferred over longer ones.
TIP 3: Incorporate ‘usability’ functions within the device
You can further reduce burden and decrease the likelihood of missing data by building features into the electronic platform that will make things easier for the participant. A good choice of device will have a silent mode (with vibrate function). This should work slightly differently to a mobile phone in that participants would specify the amount of time to enter silent mode for. The device would then automatically revert back to the default loud mode afterwards. This will reduce the likelihood of the device getting forgotten about. You may also want to allow participants to postpone responses for a short period.
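The silent-mode behaviour described above boils down to a small piece of timing logic. As a purely hypothetical sketch (the function names are mine; no particular device platform is assumed): the participant specifies a duration, the device computes a revert time, and it returns to the default loud mode automatically once that time passes:

```python
from datetime import datetime, timedelta

def enter_silent_mode(now, minutes):
    """Participant specifies a duration; return the time at which
    the device should automatically revert to loud mode."""
    return now + timedelta(minutes=minutes)

def current_mode(now, revert_at):
    """Silent until the revert time passes; loud thereafter, so the
    device cannot be left on silent and forgotten about."""
    return "silent" if now < revert_at else "loud"

# Example: participant enters a 90-minute meeting at 13:00.
start = datetime(2015, 9, 1, 13, 0)
revert_at = enter_silent_mode(start, 90)
print(current_mode(datetime(2015, 9, 1, 14, 0), revert_at))  # silent
print(current_mode(datetime(2015, 9, 1, 15, 0), revert_at))  # loud
```

The key design choice, compared with an ordinary phone, is that silence is always bounded: there is no way to mute the device indefinitely.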
TIP 4: Consider a quasi-random prompt design
A relatively simple design may prompt participants six times per day at, say, 9.00am, 11.30am, 2.00pm, 4.30pm, 7.00pm, and 9.30pm. This would be a fixed time-based design. However, in this design, participants can anticipate prompts and there may be a systematic bias if prompts happen to coincide with regular aspects of daily routines (e.g., going home from work). An alternative design, the quasi-random time-based design, divides the day into equal-sized chunks of time and randomly places a prompt within each time window. This approach necessitates an electronic platform as it needs a programmed algorithm, but gives you a representative sample of daily living and eliminates anticipatory effects. At the cutting edge of EMA design, a recent study used activity-triggered assessments of affective states (Kanning et al., 2015). Such automated ‘event-based’ designs, where real-time assessments are triggered by sensors, are an exciting step forward.
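The quasi-random algorithm itself is straightforward to sketch. Here is a hypothetical Python version (the function name, the 9am–9pm waking day, and six prompts per day are my illustrative assumptions, not a prescription): the waking day is divided into equal windows and one prompt time is drawn at random within each window:

```python
import random

def quasi_random_schedule(start_hour=9, end_hour=21, n_prompts=6, rng=None):
    """Divide the day into equal windows and place one random prompt
    in each. Returns prompt times as minutes since midnight."""
    rng = rng or random.Random()
    total_minutes = (end_hour - start_hour) * 60
    window = total_minutes // n_prompts          # e.g., 120-minute windows
    return [start_hour * 60 + i * window + rng.randrange(window)
            for i in range(n_prompts)]

# A seeded generator makes the example reproducible.
schedule = quasi_random_schedule(rng=random.Random(42))
for t in schedule:
    print(f"{t // 60:02d}:{t % 60:02d}")
```

A real implementation would typically also enforce a minimum gap between consecutive prompts, so that a late draw in one window and an early draw in the next cannot land back-to-back.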
TIP 5: Pilot, pilot, and pilot again!
For studies with clinical populations, you should fully consider the fact that some participants may have difficulties interacting with the device. For example, in some neurological diseases it is likely some participants will have motor and/or visual impairments, and ethics boards (and often funders) will want to see evidence you have considered the practicalities in advance and involved your target population in decision-making. I have previously asked a few individuals to attend 1-to-1 meetings where I’ve presented the proposed protocol and asked them to engage with a demonstration version of the device. Ask for their thoughts. Is the device large enough? Is the font large enough? Are the items clearly understood? How loud should the prompt be to be heard? What is a reasonable burden (see Tip 2)? They may also raise issues you hadn’t thought of, but that you can address before the study begins.
You should also pilot the complete protocol yourself, at least once. How is the burden? Are any of the items ambiguous? Is the data collected as expected? Whilst piloting, interact with the device strangely: press multiple buttons in an unusual order; try and go back through items after you’ve answered them. See if anything odd happens. You should be unashamedly hunting for bugs in the programming. It’s far better that you find them at this stage than a participant does! You should also accept that, on rare occasions, participants may not do as you ask them and might skip through self-report items in order to complete them quickly. So try to understand how a participant might do this, so that you can easily identify these response patterns during data cleaning. In my experience, such response behaviours are rare but sometimes quite difficult to spot in the dataset if you don’t know what you’re looking for.
That rounds off my 5 Tips for EMA. Any of you who have seen a conference presentation this year from any of the Aberdeen Health Psychology Group will have noticed we’ve been rounding them off with a shameless plug. This blog will be no different! Our group hopes to welcome you to Aberdeen next year for the joint European Health Psychology Society and BPS Division of Health Psychology Conference. I hope to see many interesting EMA studies!