Design and Planning, Data Collection, and Analysis and Interpretation
We designed multiple studies for the creation of mi.Symptoms around a process of iterative patient involvement. Table 10.3 summarizes our multistudy approach, which culminates in a longitudinal feasibility study assessing the effectiveness of mi.Symptoms in improving physical symptoms, psychological symptoms, and quality of life.
We deliberately sampled a group of patients that was diverse and balanced with respect to age, gender, race, ethnicity, and language. In each study, at least 40% of our patients represented racial minorities and at least 20% were from ethnic minorities. For age, we purposefully sampled across four generation groups (Millennials, Generation X, Boomers, and Silent Generation) (Pew Research Center). Generation group is a recommended metric for judging the adequacy of a research sample with respect to age because it is more informative of social context, secular trends in cohorts, and personal history than an arbitrary biological age category. The English version of the mi.Symptoms web application was also translated into Spanish, and some, but not all, studies involved Spanish-speaking patients. The resultant samples also represented other groups that are not always well represented in research: nearly half of participants reported having trouble making ends meet financially, more than 40% had a high school education or below, and more than 45% had inadequate health literacy.
TABLE 10.3
Overview of Studies - Case Study 2
| Study | Participants | Study Time Frame |
| Needs assessment (interviews) | 13 patients, 11 healthcare professionals | 2016 |
| Usability assessment | 12 patients | 2017 |
| Feasibility, cross-sectional pilot study | 168 patients | 2017-2018 |
| Visualization comprehension study | 40 patients | 2019 |
| Longitudinal feasibility study | 75 patients (ongoing) | 2020 |
Almost a third of the sample did not have access to a computer at home, and a quarter did not have access to the internet (Baik et al., 2019; Reading Turchioe, Grossman, Baik, et al., 2020). The subsequent paragraphs describe the purpose, sample, methods, results, and implications of each substudy in greater detail.
Prior to interacting with participants, we created an early prototype of mi.Symptoms based on a systematic review of symptom self-monitoring tools for patients with HF (Masterson Creber, Maurer, et al., 2016). Our first user study, conducted in 2016, was an initial needs assessment eliciting qualitative and quantitative (survey-based) feedback from both patients with HF and healthcare professionals treating patients with HF (Grossman et al., 2018). We identified patient participants using purposeful sampling and recruited them from a cardiac inpatient unit and an ambulatory cardiac clinic at an urban academic medical center. The interviews involved having participants use mi.Symptoms and provide feedback on semistructured questions, including the usefulness of mi.Symptoms, helpful and unhelpful features, and recommended changes. A qualitative analysis of the interview transcripts revealed challenges related to use of mi.Symptoms, such as trouble understanding the PRO questions, lack of unstructured communication, and low technology literacy. Importantly, nearly half of the patients reported trouble understanding the PRO questions and reporting their symptoms. The findings of this initial study led to a series of design requirements, notably that the design of the system should help patients understand the questions and should educate patients about how their symptoms are linked with their disease. We revised mi.Symptoms based on these preliminary findings.
Next, we completed a usability assessment with a group of 12 new patients, purposefully sampled from a cardiac inpatient unit. During the usability study, participants executed tasks using mi.Symptoms, such as answering survey questions and interpreting survey results. Participants also provided preferences regarding different visualization options for presenting PROs and completed the eight-item Standardized User Experience Percentile Rank Questionnaire (SUPR-Q). The SUPR-Q assesses usability, credibility, loyalty, appearance, and overall quality (Sauro, 2015; Schnall et al., 2018). This set of patients rated the revised version highly across all constructs in the SUPR-Q (all >0.9 out of 1). However, we discovered that half of the patients failed to interpret graphs of symptoms, and others required multiple attempts to correctly interpret graphical information (Grossman et al., 2018). This finding led us to create new PRO visualization options and to add an assessment of participant graph literacy in the subsequent 2019 visualization comprehension study (Reading Turchioe, Grossman, Myers, et al., 2020).
Our 2017-2018 study involved a larger scale feasibility assessment of mi.Symptoms. We recruited 168 patients from an inpatient cardiac unit and an ambulatory cardiac clinic. The feasibility study also involved a Spanish version of mi.Symptoms evaluated by patients whose preferred language was Spanish. In the study, we assessed the correlation of the symptoms included in mi.Symptoms with a validated measure of health status. In addition, participants provided feedback on usefulness and ease of use using a modified Health Information Technology Usability Evaluation Scale (Health-ITUES) (Schnall et al., 2018). We found that it was feasible for patients with HF to complete the PRO questions in mi.Symptoms (i.e. there were no missing data), and participants rated the tool as both useful and easy to use. Furthermore, there were no differences in perceived usefulness or ease of use based on age, suggesting that the application was also suitable for older adults (Baik et al., 2019; Reading Turchioe, Grossman, Baik, et al., 2020).
In the 2017 usability study, we found that participants struggled to accurately interpret graphical presentations of symptoms (Grossman et al., 2018). To address this issue, we completed a visualization comprehension study in 2019 with another group of 40 purposively sampled hospitalized patients with HF. In this study, we assessed participant performance interpreting PRO result information across four visualization conditions: text only; text plus visual analogy; text plus number line; and text plus line graph (Reading Turchioe, Grossman, Myers, et al., 2020). The visual analogy compared patients' functioning using a colored gauge (signifying low, medium, and high), as shown in the bottom left of Figure 10.2. The visual analogy condition yielded the highest comprehension (83% correct). We also found that participants scored poorly on a validated assessment of graph literacy, indicating that graphical visualizations, and line graphs in particular, may not support patient comprehension as well as other visualizations for this population. We also found that participants with worse cognition, lower education, and fewer financial resources had poorer comprehension of the visualizations presented, suggesting that failure to create appropriate graphical representations could have a particularly negative impact on already marginalized patients.
We have used the results of a series of usability studies with diverse groups of patients to design a validated PRO and shared decision-making application that was found useful and usable by participants. All studies involved purposive sampling across age and race, and the large-scale feasibility assessment included both English and Spanish-speaking participants. We also created a visual analogy for presenting PRO-based health information to patients that was well understood.

FIGURE 10.2 Depiction of the development of the mi.Symptoms interface across relevant studies.
Figure 10.2 illustrates the development of the mi.Symptoms interface over the previously described studies. We are currently enrolling patients in a longitudinal feasibility study to assess the feasibility of use over time in both inpatient and outpatient settings. The next step is to conduct a randomized controlled trial to determine the effectiveness of mi.Symptoms in improving symptom management and quality of life among patients with HF as compared to usual care.
In addition to incorporating different people in the patient work system (e.g. patients, healthcare professionals) throughout the project life cycle, we also made study design considerations based on the patient’s environment, assessment of patient abilities, and the sociocultural organizational contexts.
We partly attribute our ability to recruit a diverse sample of patients to our choice of study location and recruitment approach. Previous studies have demonstrated that people in underserved groups are less likely to reach out and volunteer for research studies (Ford et al., 2008). In each case, we recruited currently hospitalized patients from urban academic medical centers, in addition to ambulatory patients in some cases. Recruitment of hospitalized patients ensured that the participant pool more closely matched those with symptomatic HF, not just those receiving outpatient follow-up for HF. Recruiting from an inpatient setting also reduces barriers to participation, such as requesting time off of work and organizing transportation. In addition, in urban environments space may be a significant barrier to study participation: exam rooms are often unavailable for research, and conducting study activities in crowded waiting rooms risks violating patient privacy. Furthermore, if patients are paying for parking by the hour in a city like New York, they are less likely to take the extra time to participate in a study. This approach to recruitment further allowed us to provide the technology for the study without having to purchase a device for every participant, which can be a cost-saving measure in the early phases of design.
Although the inpatient recruitment approach worked well for our purposes, it may not work well for all patient populations, for example, for diseases where hospitalization is not common or when a study requires a healthy population. Additionally, some patients may find it burdensome to be approached and asked to participate in a research study while hospitalized. There are also patient populations for which participation may not be safe or feasible. For example, our studies excluded patients with severe cognitive impairment and unstable psychiatric illnesses. These exclusion criteria were identified first through the patients' electronic medical records and second by the patients' healthcare providers, from whom we gained approval prior to approaching patients to participate. In addition, we acknowledge the limitation that there is a loss of realism in reporting symptoms while admitted to the hospital, because patients do not face many of the complexities of their day-to-day lives outside of the hospital. On the other hand, we also found that hospitalization was a teachable moment, and patients who may not have previously seen value in symptom monitoring now understood the consequences of letting symptoms worsen into an acute exacerbation. To address the limitations of inpatient recruitment, we are completing a longitudinal feasibility study by recruiting from an outpatient setting. In this case, we also brought the study to the participants, completing initial recruitment in an outpatient heart failure clinic. Both approaches (inpatient vs. outpatient recruitment), however, share the limitation of missing patients who are not currently under the care of a provider at an academic medical center for their HF. In that way, we are likely missing the most vulnerable patients in need of intervention.
To understand person-related differences in perceptions and performance, we collected data related to demographics, cognitive status, and abilities at each stage of research. In our usability and feasibility studies, we collected demographic information as well as other SDOH-related data, such as socioeconomic status and insurance status. In each study, we also assessed the participants' health literacy. Health literacy dictates patients' and caregivers' ability to: find information and services; communicate their needs and preferences and respond to information and services; process the meaning and usefulness of the information and services; understand the choices, consequences, and context of the information and services; and decide which information and services match their needs and preferences so they can act (Centers for Disease Control and Prevention, 2019). Our iterative, multistudy approach allowed us to adapt later studies to incorporate findings from previous studies. For example, in our early usability study we found that patients had difficulty interpreting graphs; therefore, in the later study comparing visualizations, we added further assessments of cognitive function and graph literacy. Our team also noticed in earlier studies that, although participants with severe cognitive impairments (e.g. dementia) had already been excluded, some participants still had trouble with motivation and memory. Therefore, we included the Montreal Cognitive Assessment (MoCA) to quantify cognitive impairment and evaluate whether it was driving differences in comprehension of symptom visualizations (Nasreddine et al., 2005; Reading Turchioe, Grossman, Myers, et al., 2020; Smith et al., 2007).
We did find that some patients started to fatigue as they completed the survey instruments. The most challenging survey for participants was the four-item graph literacy questionnaire. Because of this, we administered the graph literacy survey last. If participants said they no longer wanted to answer questions, the study was almost over at that point and thus minimal data were lost. Had we placed the measurement of graph literacy early among the survey instruments, we may not have had such strong study completion rates. Ultimately, the inclusion of graph literacy and cognition measures was critical for understanding patients' ability to use mi.Symptoms and especially for identifying which patient groups require additional support. Although survey completion required some perseverance, it was well worth the effort in our use case.
We also utilized SDOH surveys and assessments of patient abilities to complete post hoc subgroup analyses. Our subgroup analyses helped us identify important findings, including high ratings of usability across age groups, and that, despite our best efforts, some underserved groups still performed worse than others in the final usability assessment. For instance, when we compared perceived ease of use and usefulness between English and Spanish speakers, scores for both were statistically significantly lower among Spanish speakers than among English speakers. There were, however, no differences detected in perceived ease of use and usefulness by age.
TABLE 10.4
Dissemination Summary - Case Study 2
| Outlet | Description | Citation |
| Health informatics and mHealth | Review of current tools for collecting PROs in an HF patient population | Masterson Creber, Maurer, et al. (2016) |
| Health informatics and mHealth | Initial needs assessment interviews and usability evaluation of mi.Symptoms prototype | Grossman et al. (2018) |
| Health informatics and mHealth | Comprehension assessment of different visualizations of PROs in mi.Symptoms | Reading Turchioe, Grossman, Myers, et al. (2020) |
| Heart failure journal focused | Patient activation among hospitalized patients with heart failure | Masterson Creber et al. (2017) |
| Heart failure journal focused | Evaluation of PRO assessment questionnaire in relation to heart failure outcomes | Baik et al. (2019) |
| Heart failure journal focused | Gerontechnologies for patients with heart failure | Masterson Creber, Hickey et al. (2016) |
| Population-focused (older adults) | Usability and feasibility assessment of mi.Symptoms for collecting PROs from HF patients | Reading Turchioe, Grossman, Baik, et al. (2020) |
Dissemination and Implementation
As in case study 1, Table 10.4 provides a summary of the dissemination efforts related to the evaluation and testing of mi.Symptoms. We balanced publication across technology (informatics, mHealth), clinical domain/environment (heart failure), and patient population groups of interest (older adults). This strategy allowed us to disseminate findings to professionals who care for patients with HF, practitioners who create consumer health information technologies, and those who serve relevant populations (older adults, cardiology patients).