
Recommender Systems for Health Informatics: State-of-the-Art and Future Perspectives

Review

  • Robin De Croon 1, PhD;
  • Leen Van Houdt 1, MSc;
  • Nyi Nyi Htun 1, PhD;
  • Gregor Štiglic 2, PhD;
  • Vero Vanden Abeele 1, PhD;
  • Katrien Verbert 1, PhD

1Department of Computer Science, KU Leuven, Leuven, Belgium

2Faculty of Health Sciences, University of Maribor, Maribor, Slovenia

Corresponding Author:

Robin De Croon, PhD

Department of Computer Science

KU Leuven

Celestijnenlaan 200A

Leuven, 3001

Belgium

Telephone: 32 16373976

Email: robin.decroon@kuleuven.be


Background: Health recommender systems (HRSs) offer the potential to motivate and engage users to change their behavior by sharing better choices and actionable knowledge based on observed user behavior.

Objective: We aim to review HRSs targeting nonmedical professionals (laypersons) to better understand the current state of the art and identify both the main trends and the gaps with respect to current implementations.

Methods: We conducted a systematic literature review according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines and synthesized the results. A total of 73 published studies that reported both an implementation and evaluation of an HRS targeted to laypersons were included and analyzed in this review.

Results: Recommended items were classified into 4 major categories: lifestyle, nutrition, general health care information, and specific health conditions. The majority of HRSs use hybrid recommendation algorithms. Evaluations of HRSs vary greatly; half of the studies only evaluated the algorithm with various metrics, whereas others performed full-scale randomized controlled trials or conducted in-the-wild studies to evaluate the impact of HRSs, thereby showing that the field is slowly maturing. On the basis of our review, we derived five reporting guidelines that can serve as a reference frame for future HRS studies. HRS studies should clarify who the target user is and to whom the recommendations apply, what is recommended and how the recommendations are presented to the user, where the data set can be found, what algorithms were used to calculate the recommendations, and what evaluation protocol was used.

Conclusions: There is significant opportunity for HRSs to inform and guide health actions. Through this review, we promote the discussion of ways to augment HRS research by recommending a reference frame with five design guidelines.

J Med Internet Res 2021;23(6):e18035

doi:10.2196/18035

Keywords



Research Goals

Current health challenges are often related to our modern way of living. High blood pressure, high glucose levels, and physical inactivity are all linked to a modern lifestyle characterized by sedentary living, chronic stress, or a high intake of energy-dense foods and recreational drugs []. Moreover, people commonly make poor decisions related to their health for distinct reasons, for instance, busy lifestyles, abundant options, and a lack of knowledge []. Practically all modern lifestyle health risks are directly affected by people's health decisions [], such as an unhealthy diet or physical inactivity, which can contribute up to three-fourths of all health care costs in the United States []. Most risks can be minimized, prevented, or sometimes even reversed with small lifestyle changes. Eating healthily, increasing daily activity, and knowing where to find validated health information could lead to improved health status [].

Health recommender systems (HRSs) offer the potential to motivate and engage users to change their behavior [] and provide people with better choices and actionable knowledge based on observed behavior [-]. The overall objective of an HRS is to empower people to monitor and improve their health through technology-assisted, personalized recommendations. As one approach of modern health care is to involve patients in the cocreation of their own health, rather than just leaving it in the hands of medical experts [], we limit the scope of this paper to HRSs that focus on laypersons, that is, non–health care professionals. These HRSs are different from clinical decision support systems that provide recommendations for health care professionals. However, laypersons also need to understand the rationale of recommendations, as echoed by many researchers and practitioners []. This paper also studies the role of a graphical user interface. To guide this study, we define our research questions (RQs) as follows:

RQ1: What are the main applications of recent HRSs, and what do these HRSs recommend?

RQ2: Which recommender techniques are being used across different HRSs?

RQ3: How are the HRSs evaluated, and are end users involved in their evaluation?

RQ4: Is a graphical user interface designed, and how is it used to communicate the recommended items to the user?

Recommender Systems and Techniques

Recommender techniques are traditionally divided into different categories [,] and are discussed in several state-of-the-art surveys []. Collaborative filtering is the most used and mature technique; it compares the actions of multiple users to generate personalized suggestions. An example of this technique can typically be found on e-commerce sites, such as "Customers who bought this item also bought..." Content-based filtering is another technique that recommends items that are similar to other items preferred by the specific user. It relies on the characteristics of the objects themselves, so recommendations are likely to be highly relevant to a user's interests. This makes content-based filtering especially valuable for application domains with large libraries of a single type of content, such as MedlinePlus' curated consumer health information []. Knowledge-based filtering is another technique that incorporates knowledge by logic inferences. This type of filtering uses explicit knowledge about an item, user preferences, and other recommendation criteria. However, knowledge acquisition can also be dynamic and rely on user feedback. For example, a camera recommender system might ask users about their preferences, fixed or zoom lenses, and budget and then suggest a relevant camera. Hybrid recommender systems combine multiple filtering techniques to increase the accuracy of recommendation systems. For example, the "companies you may want to follow" feature in LinkedIn uses both content and collaborative filtering data []: collaborative filtering data are included to determine whether a company is similar to the ones a user already followed, whereas content data ensure that the industry or location matches the interests of the user. Finally, recommender techniques are often augmented with additional methods to incorporate contextual information in the recommendation process [], including recommendations via contextual prefiltering, contextual postfiltering, and contextual modeling [].
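
To make the distinction between the two classic techniques concrete, the following minimal Python sketch scores unseen items for one user with user-based collaborative filtering and with content-based filtering over hand-crafted item features. The ratings matrix, item features, and all numbers are invented for illustration and are not drawn from any system cited in this review.

```python
# Toy comparison of collaborative vs content-based filtering (illustrative only).
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors; 0 if either vector is all zeros."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / norm) if norm else 0.0

# Ratings matrix: rows = users, columns = health items (0 = not rated).
ratings = np.array([
    [5, 4, 0, 1],   # user 0 (our target user)
    [4, 5, 0, 0],   # user 1 (similar tastes to user 0)
    [1, 0, 5, 4],   # user 2
], dtype=float)

# Collaborative filtering: weight other users' ratings by their similarity to user 0.
target = 0
sims = np.array([cosine(ratings[target], ratings[u]) for u in range(len(ratings))])
sims[target] = 0.0
cf_scores = sims @ ratings / (sims.sum() or 1.0)

# Content-based filtering: describe items by features (e.g., low_sugar, outdoor)
# and score items by similarity to a profile built from the user's liked items.
item_features = np.array([
    [1, 0],  # item 0: low_sugar
    [1, 0],  # item 1: low_sugar
    [0, 1],  # item 2: outdoor
    [0, 1],  # item 3: outdoor
], dtype=float)
liked = ratings[target] >= 4
profile = item_features[liked].mean(axis=0)
cb_scores = np.array([cosine(profile, f) for f in item_features])

unseen = ratings[target] == 0
print("Collaborative scores for unseen items:", cf_scores[unseen])
print("Content-based scores for unseen items:", cb_scores[unseen])
```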

HRSs for Laypersons

Ricci et al [] define recommender systems as:

Recommender Systems (RSs) are software tools and techniques providing suggestions for items to be of use to a user [,,]. The suggestions relate to various decision-making processes, such as what items to buy, what music to listen to, or what online news to read.

In this paper, we analyze how recommender systems have been used in health applications, with a focus on laypersons. Wiesner and Pfeifer [] broadly define an HRS as:

a specialization of an RS [recommender system] as defined by Ricci et al []. In the context of an HRS, a recommendable item of interest is a piece of nonconfidential, scientifically proven or at least generally accepted medical information.

Researchers have sought to consolidate the vast body of literature on HRSs by publishing several surveys, literature reviews, and state-of-the-art overviews. Table 1 provides an overview of existing summative studies on HRSs that identify existing research and shows the number of studies included, the method used to analyze the studies, the scope of the paper, and their contribution.

Table 1. An overview of the existing health recommender system overview papers.
Review Papers, n Method Scope Contribution
Sezgin and Özkan (2013) [22] 8 Systematic review Provides an overview of the literature in 2013 Identifying challenges (eg, cyber-attacks, difficult integration, and data mining can cause ethical issues) and opportunities (eg, integration with personal health data, gathering user preferences, and increased consistency)
Calero Valdez et al (2016) [23] 17 Survey Stresses the importance of the interface and HCIa of an HRSb Providing a framework to incorporate domain understanding, evaluation, and specific methodology into the development process
Kamran and Javed (2015) [24] 7 Systematic review Provides an overview of existing recommender systems with more focus on health care systems Proposing a hybrid HRS
Afolabi et al (2015) [25] 22 Systematic review Researches empirical results and practical implementations of HRSs Presenting a novel proposal for the integration of a recommender system into smart home care
Ferretto et al (2017) [26] 8 Systematic review Identifies and analyzes HRSs available in mobile apps Identifying that not many mobile health care apps have HRSs
Hors-Fraile et al (2018) [27] 19 Systematic review Identifies, categorizes, and analyzes existing knowledge on the use of HRSs for patient interventions Proposing a multidisciplinary taxonomy, including integration with electronic health records and the incorporation of health promotion theoretical factors and behavior change theories
Schäfer et al (2017) [28] 24 Survey Discusses HRSs to find personalized, complex medical interventions or support users with preventive health care measures Identifying challenges subdivided into patient and user challenges, recommender challenges, and evaluation challenges
Sadasivam et al (2016) [29] 15 Systematic review Researches limitations of current CTHCc systems Identifying challenges of incorporating recommender systems into CTHC. Proposing a future research agenda for CTHC systems
Wiesner and Pfeifer (2014) [21] Not reported Survey Introduces HRSs and explains their usefulness to personal health record systems Outlining an evaluation approach and discussing challenges and open issues
Cappella et al (2015) [30] Not reported Survey Explores approaches to the development of a recommendation system for archives of public health messages Reflecting on theory development and applications

aHCI: human-computer interaction.

bHRS: health recommender system.

cCTHC: computer-tailored health communication.

As can be seen in Table 1, the scope of the existing literature varies greatly. For example, Ferretto et al [] focused solely on HRSs in mobile apps. A total of 3 review studies focused specifically on the patient side of the HRS: (1) Calero Valdez et al [] analyzed the existing literature from a human-computer interaction perspective and stressed the importance of a good HRS graphical user interface; (2) Schäfer et al [] focused on tailoring recommendations to end users based on health context, history, and goals; and (3) Hors-Fraile et al [] focused on the individual user by analyzing how HRSs can target behavior change strategies. The most extensive study was conducted by Sadasivam et al []. In their study, most HRSs used knowledge-based recommender techniques, which might limit individual relevance and the ability to adapt in real time. However, they also reported that HRSs have the opportunity to utilize a near-infinite number of variables, which enables tailoring beyond designer-written rules based on data. The most important challenges reported were the cold start [], where limited data are available at the start of the intervention, limited sample size, adherence, and potential unintended consequences []. Finally, we observed that these existing summative studies were often restrictive in their final set of papers.

Our contributions to the community are fourfold. First, we analyze a broader set of research studies to gain insights into the current state of the art. We do not limit the included studies to specific devices or patients in a clinical setting but focus on laypersons in general. Second, through a comprehensive analysis, we aim to identify the applications of recent HRS apps and gain insights into the actionable knowledge that HRSs can provide to users (RQ1), to identify which recommender techniques have been used successfully in the domain (RQ2), how HRSs have been evaluated (RQ3), and the role of the user interface in communicating recommendations to users (RQ4). Third, based on our extensive literature review, we derive a reference frame with 5 reporting guidelines for future layperson HRS research. Finally, we collected and coded a unique data set of 73 papers, which is publicly available in [-,,-] for other researchers.


Search Strategy

This study was conducted according to the key steps required for systematic reviews, following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines []. A literature search was conducted using the ACM Digital Library (n=2023), IEEE Xplore (n=277), and PubMed (n=93) databases. As mentioned earlier, in this systematic review we focused solely on HRSs aimed at laypersons. However, many types of systems, algorithms, and devices can be considered as an HRS. For example, push notifications in a mobile health app or health tips prompted by web services can also be considered as health-related recommendations. To outline the scope, we limited the search terms to include a recommender or recommendation, as reported by the authors. The search keywords were as follows, using an inclusive OR: (recommender OR recommendation systems OR recommendation system) AND (health OR healthcare OR patient OR patients).

In addition, a backward search was performed by examining the bibliographies of the survey and review papers discussed in the Introduction section and the reference lists of included studies to identify any additional studies. A forward search was performed to search for articles that cited the work summarized in Table 1.

Study Inclusion and Exclusion Criteria

As existing work did not include many studies (Table 1) and focused on a specific medical domain or device, such as mobile phones, this literature review used nonrestrictive inclusion criteria. Studies that met all the following criteria were included in the review: described an HRS whose main focus was to improve health (eg, food recommenders solely based on user preferences [] were not included); targeted laypersons (eg, activity recommendations targeted at a proxy user such as a coach [] were not included); implemented the HRS (eg, papers describing only an HRS concept were not included); reported an evaluation, either web-based or offline; were peer-reviewed and published; and were published in English.

Papers were excluded when one of the following was true: the recommendations of HRSs were unclear; the full text was unavailable; or a newer version was already included.

Finally, when multiple papers described the same HRS, only the latest, most relevant full paper was included.

Classification

To address our RQs, all included studies were coded for five distinct coding categories.

Study Details

To contextualize new insights, the publication year and publication venue were analyzed.

Recommended Items

HRSs are used across different health domains. To provide details on what is recommended, all papers were coded according to their respective health domains. To not limit the scope of potential items, no predefined coding table was used. Instead, all papers were initially coded by the first author. The resulting recommendations were then clustered in collaboration with the coauthors into 4 categories: lifestyle, nutrition, general health information, and specific health conditions.

Recommender Techniques

This category encodes the recommender techniques that were used: collaborative filtering [], content-based filtering [], knowledge-based filtering [], and their hybridizations []. Some studies did not specify any algorithmic details or compared multiple techniques. Finally, when an HRS used contextual information, it was coded whether they used pre- or postfiltering or contextual modeling.

Evaluation Approach

This category encodes which evaluation protocols were used to measure the effect of HRSs. We coded whether the HRSs were evaluated through offline evaluations (no users involved), surveys, heuristic feedback from expert users, controlled user studies, deployments in the wild, and randomized controlled trials (RCTs). We also coded sample size and study duration and whether ethical approval was gathered and needed.

Interface and Transparency

Recommender systems are oftentimes perceived as a black box, as the rationale for recommendations is often not explained to end users. Recent research increasingly focuses on providing transparency into the inner logic of the system []. We encoded whether explanations are provided and, if so, how such transparency is supported in the user interface. Furthermore, we also classified whether the user interface was designed for a specific platform, categorized as mobile, web, or other.

Data Extraction, Intercoder Reliability, and Quality Assessment

The required information for all included technologies and studies was coded by the first author using a data extraction form. Owing to the large variety of study designs, the included studies were assessed for quality (detailed scores are provided in the multimedia appendix) using the tool by Hawker et al []. Using this tool, the abstract and title, introduction and aims, method and data, sample size (if applicable), data analysis, ethics and bias, results, transferability or generalizability, and implications and usefulness were each allocated a score between 1 and 4, with higher-scoring studies indicating higher quality. A random selection of 14% (10/73) of the papers was listed in a spreadsheet and coded by a second researcher following the defined coding categories and subcategories. The decisions made by the second researcher were compared with those of the first. For the recommended items, there was only one small disagreement, between physical activity and leisure activity [], but all other recommended items were rated exactly the same; the recommender techniques had a Cohen κ value of 0.71 (P<.001), and the evaluation approach scored a Cohen κ value of 0.81 (P<.001). There was moderate agreement (Cohen κ=0.568; P<.001) between the researchers concerning the quality of the papers. The interface codes were in perfect agreement. Finally, the coding data are available in the multimedia appendix.
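
As a small illustration of the intercoder-reliability statistic reported above, the sketch below computes a Cohen κ with scikit-learn. The coder labels are invented for demonstration and do not reproduce the review's actual coding sheet.

```python
# Illustrative computation of Cohen kappa between two coders (toy labels).
from sklearn.metrics import cohen_kappa_score

coder_1 = ["hybrid", "knowledge", "content", "hybrid", "collaborative", "knowledge"]
coder_2 = ["hybrid", "knowledge", "content", "knowledge", "collaborative", "knowledge"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen kappa for recommender-technique coding: {kappa:.2f}")
```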


Study Details

The literature search in the 3 databases yielded 2340 studies, of which only 23 were duplicates and 53 were full proceedings, leaving 2324 studies to be screened for eligibility. A total of 2161 studies were excluded upon title or abstract screening because they were unrelated to health or targeted at medical professionals or because the papers did not report an evaluation. Thus, the remaining 163 full-text studies were assessed for eligibility. After the removal of 90 studies that failed the inclusion criteria or met the exclusion criteria, 73 published studies remained. The search procedure is illustrated in Figure 1.

Figure 1. Flow diagram according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. EC: exclusion criteria; IC: inclusion criteria.

All included papers were published in 2009 or later, following an upward trend of increasing popularity. The publication venues of HRSs are diverse. Only the PervasiveHealth [-], RecSys [,,], and WI-IAT [-] conferences each published 3 papers that were included in this study. The Journal of Medical Internet Research was the only journal that occurred more frequently in our data set; 5 papers were published by the Journal of Medical Internet Research [-]. The papers were first rated using the Hawker tool []. Owing to the large number of offline evaluations, we did not include the sample-size score, to enable a comparison between all included studies. The papers received an average score of 24.32 (SD 4.55, max 32; data set presented in the multimedia appendix). Most studies scored very poorly on reporting ethics and potential biases, as illustrated in Figure 2. Nonetheless, there is an upward trend over the years toward more acceptable reporting of ethical issues and potential biases. The authors also limited themselves to their specific case studies and did not make any recommendations for policy (last box plot in Figure 2). All 73 studies reported the use of distinct data sets. Although all recommended items were health related, only Asthana et al [] explicitly mentioned using electronic health record data. Only 14% (10/73) [,-] explicitly reported that they addressed the cold-start problem.

Figure 2. Distribution of the quality assessment using the Hawker tool.

Recommended Items

Overview

Most HRSs operated in different domains and thus recommended different items. In this study, 4 nonmutually exclusive categories of recommended items were identified: lifestyle, 33% (24/73); nutrition, 36% (26/73); general health information, 32% (23/73); and specific health condition–related recommendations, 12% (9/73). The only significant trend we found is the increasing popularity of nutrition advice. The distribution of these recommended items is detailed in the following subsections.

Lifestyle

Many HRSs, 33% (24/73) of the included studies, suggest lifestyle-related items, but they differ greatly in their exact recommendations. Physical activity is often recommended. Physical activities are often personalized according to personal interests [] or the context of the user []. In addition to physical activities, Kumar et al [] recommend eating, shopping, and socializing activities. One study analyzes the data and measurements to be tracked for an individual and then recommends the appropriate wearable technologies to stimulate proactive health []. A total of 7 studies [,,,,-] more directly try to convince users to change their behavior by recommending them to change, or alter, their behavior: for example, Rabbi et al [] learn "a user's physical activity and dietary behavior and strategically suggests changes to those behaviors for a healthier lifestyle." In another example, both Marlin et al [] and Sadasivam et al [] motivate users to stop smoking by providing them with tailored messages, such as "Keep in mind that cravings are temporary and will pass." Messages could reflect the theoretical determinants of quitting, such as positive outcome expectations and self-efficacy–enhancing small goals [].

Diet

The influence of food on health is also clear from the large subset of HRSs dealing with nutrition recommendations. A total of 36% (26/73) of the studies recommend nutrition-related items, such as recipes [], meal plans [], restaurants [], or even help with choosing healthy items from a restaurant menu []. Wayman and Madhvanath [] provide automated, personalized, and goal-driven dietary guidance to users based on grocery receipt data. Trattner and Elsweiler [] use postfiltering to focus on healthy recipes only and extend them with nutrition advice, whereas Ge et al [] require users to first enter their preferences for better recommendations. Moreover, Gutiérrez et al [] propose healthier alternatives through augmented reality when users are shopping. A total of 7 studies specifically recommend healthy recipes [,,,,-]. Most of these HRSs consider the health condition of the user, such as the DIETOS system []. Other systems recommend new recipes that are synthesized from existing recipes [], assist parents in making appropriate food for their toddlers [], or help users to choose allergy-safe recipes [].

General Health Information

According to 32% (23/73) of the included studies, providing access to trustworthy health care information is another common objective. A total of 5 studies focused on personalized, trustworthy information per se [,,-], whereas 5 others focused on guiding users through health care forums [,-]. In total, 3 studies [,,] provided personalized access to general health information. For instance, Sanchez Bocanegra et al [] targeted health-related videos and augmented them with trustworthy information from the United States National Library of Medicine (MedlinePlus) []. A total of 3 studies [,,] related to health care forums focused on finding relevant threads. Cho et al [] built "an autonomous agent that automatically responds to an unresolved user query by posting an automatic response containing links to threads discussing similar medical problems." Furthermore, 2 studies [,] helped patients to find similar patients. Jiang and Yang [] investigated approaches for measuring user similarity in web-based health social websites, and Lima-Medina et al [] built a virtual environment that facilitates contact among patients with cardiovascular problems. Both studies aim to help users seek informational and emotional support in a more efficient way. A total of 4 studies [,-] helped patients to find appropriate doctors for a specific health problem, and 4 other studies [,-] focused on finding nearby hospitals. A total of 2 studies [,] focused solely on the clinical preferences of the patients, whereas Krishnan et al [] "provide health care recommendations that include Blood Donor recommendations and Hospital Specialization." Finally, Tabrizi et al [] considered patient satisfaction as the primary feature for recommending hospitals to the user.

Specific Health Conditions

The last group of studies (9/73, 12%) focused on specific health conditions. However, the recommended items vary significantly. Torrent-Fontbona and Lopez Ibanez [] built a knowledge-based recommender system to help diabetes patients in numerous cases, such as the estimated carbohydrate intake and past and future physical activity. Pustozerov et al [] attempt to "reduce the carbohydrate content of the desired meal by reducing the amount of carbohydrate-rich products or by suggesting variants of products for replacement." Li and Kong [] provided diabetes-related information, such as the need for a low-sodium lunch, targeted at American Indians through a mobile app. Other health conditions supported by recommender systems include depression and anxiety [], mental disorders [], and stress [,,,]. Both the mental disorder [] and the depression and anxiety [] HRSs recommend mobile apps. For example, the app MoveMe suggests exercises tailored to the user's mood. The HRSs to alleviate stress include recommending books to read [] and meditative audios [].

Recommender Techniques

Overview

The recommender techniques used varied greatly. Table 2 shows the distribution of these recommender techniques.

Table 2. Overview of the different recommender techniques used in the studies.
Main techniquea Study Total studies, n (%)
Collaborative filtering [59,69,76] 3 (4)
Content-based filtering [15,32,54,63,72,86,87] 7 (10)
Knowledge-based filtering [9,38,44,50,57,64,66,68,79,81,82,84,88-91] 16 (22)
Hybrid [7,29,34,36,37,39-41,43,46-48,53,55,56,61,65,67,69,70,73,74,77,78,80,85,92-96,111] 32 (44)
Context-based techniques [33,35,58,97] 4 (5)
Not specified [45,83,98] 3 (4)
Comparison between techniques [8,49,52,60,62,71,75,99] 8 (11)

aThe papers are classified based on how the authors reported their techniques.

Recommender Techniques in Practice

The majority of HRSs (49/73, 67%) rely on knowledge-based techniques, either directly (17/49, 35%) or in a hybrid approach (32/49, 65%). Knowledge-based techniques are often used to incorporate additional patient information into the recommendation process [] and have been shown to improve the quality of recommendations while alleviating other drawbacks, such as cold-start and sparsity issues []. Some studies employ straightforward approaches, such as if-else reasoning based on domain knowledge [,,,,,,]. Other studies use more complex algorithms, such as particle swarm optimization [], fuzzy logic [], or reinforcement learning algorithms [,].

In total, 32 studies reported using a combination of recommender techniques and are classified as hybrid recommender systems. Different knowledge-based techniques are often combined. For example, Ali et al [] used a combination of rule-based reasoning, case-based reasoning, and preference-based reasoning to recommend personalized physical activities according to the user's specific needs and personal interests. Asthana et al [] combined the knowledge of a decision tree and demographic data to identify health conditions. When health conditions are known, the system knows which measurements need to be monitored. A total of 7 studies used a content-based technique to recommend educational content [,,], activities [,], reading materials [], or nutritional advice [].

Although collaborative filtering is a popular technique [], it is not used frequently in the HRS domain. Marlin et al [] used collaborative filtering to personalize future smoking cessation messages based on explicit feedback on past messages. This approach is used more often in combination with other techniques. A total of 2 studies [,] combined content-based techniques with collaborative filtering. Esteban et al [], for example, switched between content-based and collaborative approaches. The former approach is used for new physiotherapy exercises and the latter when a new patient is registered or when previous recommendations to a patient are updated.
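
The switching idea described for Esteban et al can be illustrated with a short, hypothetical sketch: fall back to a content-based score when collaborative data are missing (a new exercise or a new patient), and use collaborative filtering otherwise. All function names, scoring stubs, and toy data here are assumptions for illustration, not the authors' implementation.

```python
# Minimal switching hybrid: content-based for cold-start cases, collaborative otherwise.
from typing import Dict, Set

def collaborative_score(patient_id: str, exercise_id: str,
                        ratings: Dict[str, Dict[str, float]]) -> float:
    """Average rating given to the exercise by other patients (toy stand-in)."""
    others = [r[exercise_id] for p, r in ratings.items()
              if p != patient_id and exercise_id in r]
    return sum(others) / len(others) if others else 0.0

def content_score(patient_profile: Set[str], exercise_tags: Set[str]) -> float:
    """Jaccard overlap between patient needs and exercise tags (toy stand-in)."""
    union = patient_profile | exercise_tags
    return len(patient_profile & exercise_tags) / len(union) if union else 0.0

def recommend_score(patient_id, exercise_id, ratings, profiles, tags) -> float:
    is_new_patient = patient_id not in ratings or not ratings[patient_id]
    is_new_exercise = not any(exercise_id in r for r in ratings.values())
    if is_new_patient or is_new_exercise:
        return content_score(profiles.get(patient_id, set()), tags.get(exercise_id, set()))
    return collaborative_score(patient_id, exercise_id, ratings)

# Toy data: two known patients with ratings and one brand-new patient (p2).
ratings = {"p1": {"stretch": 4.0, "core": 2.0}, "p3": {"stretch": 5.0}}
profiles = {"p1": {"back_pain"}, "p2": {"knee", "mobility"}}
tags = {"stretch": {"back_pain", "mobility"}, "core": {"back_pain"}}

print(recommend_score("p2", "stretch", ratings, profiles, tags))  # content-based path
print(recommend_score("p1", "stretch", ratings, profiles, tags))  # collaborative path
```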

Context-Based Recommender Techniques

From an HRS perspective, context is described as an aggregate of various information that describes the setting in which an HRS is deployed, such as the location, the current activity, and the available time of the user. A total of 5 studies use contextual data to improve their recommendations but use different techniques; a prefilter uses contextual information to select or construct the most relevant data for generating recommendations. For example, in Narducci et al [], the set of potentially similar patients was restricted to consultation requests in a specific medical area. Rist et al [] applied a rule-based contextual prefiltering approach [] to filter out inadequate recommendations, for example, "if it is dark outside, all outdoor activities, such as 'take a walk,' are filtered out" [] before they are fed to the recommendation algorithm. In contrast, a postfilter removes recommended items after they are generated, such as filtering outdoor activities while it is raining. Casino et al [] used a postfiltering technique by running the recommended items through a real-time constraint checker. Finally, contextual modeling, which was used by 2 studies [,], uses contextual information directly in the recommendation function as an explicit predictor of a user's rating for an item [].
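
To illustrate the difference between these two strategies, here is a minimal, hypothetical Python sketch (the activity catalog, relevance scores, and context flags are invented): a prefilter removes candidates before the recommender ranks them, and a postfilter drops items from the ranked output.

```python
# Contextual prefiltering vs postfiltering around a toy ranking step.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Activity:
    name: str
    outdoor: bool
    score: float  # stand-in for a recommender's relevance score

CATALOG = [
    Activity("take a walk", outdoor=True, score=0.9),
    Activity("stretching routine", outdoor=False, score=0.7),
    Activity("cycle to the park", outdoor=True, score=0.8),
    Activity("meditation audio", outdoor=False, score=0.6),
]

def rank(items: List[Activity]) -> List[Activity]:
    """Stand-in for the underlying recommender: sort by relevance score."""
    return sorted(items, key=lambda a: a.score, reverse=True)

def prefilter(items: List[Activity], context: Dict[str, bool]) -> List[Activity]:
    # Contextual prefiltering: exclude outdoor items when it is dark outside.
    return [a for a in items if not (a.outdoor and context.get("is_night", False))]

def postfilter(ranked: List[Activity], context: Dict[str, bool]) -> List[Activity]:
    # Contextual postfiltering: drop outdoor items from the ranked list when raining.
    return [a for a in ranked if not (a.outdoor and context.get("is_raining", False))]

context = {"is_night": True, "is_raining": False}
recommendations = postfilter(rank(prefilter(CATALOG, context)), context)
print([a.name for a in recommendations])
```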

Location, agenda, and weather are examples of contextual data used by Lin et al [] to promote the adoption of a healthy and active lifestyle. Cerón-Rios et al [] used a decision tree to analyze user needs, health data, interests, time, location, and lifestyle to promote healthy habits. Casino et al [] gathered contextual information through smart city sensor data to recommend healthier routes. Similarly, contextual information was acquired by Rist et al [] using sensors embedded in the user's environment.

Comparisons

A total of 8 papers compared different recommender techniques to find the most optimal algorithm for a specific data set, end users, domain, and goal. Halder et al [] used two well-known health forum data sets (PatientsLikeMe [] and HealthBoards []) to compare 7 recommender techniques (among them collaborative filtering and content-based filtering) and found that a hybrid approach scored best []. Another example is the study by Narducci et al [], who compared 4 recommendation algorithms: cosine similarity as a baseline, collaborative filtering, their own HealthNet algorithm, and a hybrid of HealthNet and cosine similarity. They concluded that a prefiltering technique for similar patients in a specific medical area can drastically improve the recommendation accuracy []. The average and SD of the resulting ratings of the 2 collaborative techniques are compared with random recommendations by Li et al []. They show that a hybrid approach of a collaborative filter augmented with the calculated health level of the user performs better. In their nutrition-based meal recommender system, Yang et al [] used item-wise and pairwise image comparisons in a two-step procedure. In conclusion, the 8 studies showed that recommendations can be improved when the benefits of multiple recommender techniques are combined in a hybrid solution [] or contextual filters are applied [].

Evaluation Approach

Overview

HRSs can be evaluated in multiple ways. In this study, we found 2 categories of HRS evaluations: (1) offline evaluations that apply computational approaches to evaluate the HRS and (2) evaluations in which an end user is involved. Some studies used both approaches.

Offline Evaluations

Of the total studies, 47% (34/73) do not involve users directly in their method of evaluation. The evaluation metrics also vary greatly, as many distinct metrics are reported in the included papers. Precision, 53% (18/34); accuracy, 38% (13/34); performance, 35% (12/34); and recall, 32% (11/34) were the most commonly used offline evaluation metrics. Recall has been used significantly more in recent papers, whereas accuracy also follows an upward trend. Moreover, performance was defined differently across studies. Torrent-Fontbona and Lopez Ibanez [] compared the "amount of time in the glycaemic target range by reducing the time below the target" as performance. Cho et al [] compared the precision and recall to report the performance. Clarke et al [] calculated their own reward function to compare different approaches, and Lin et al [] measured system performance as the number of messages sent in their in-the-wild study. Finally, Marlin et al [] tested the predictive performance using a triple cross-validation procedure.

Other popular offline evaluation metrics are accuracy-related measurements, such as mean absolute (percentage) error, 18% (6/34); normalized discounted cumulative gain (nDCG), 18% (6/34); F1 score, 15% (5/34); and root mean square error, 15% (5/34). The other metrics were measured inconsistently. For example, Casino et al [] reported that they measure robustness but do not outline what they measure as robustness; however, they measured the mean absolute error. Torrent-Fontbona and Lopez Ibanez [] defined robustness as the capability of the system to handle missing values. Effectiveness is also measured with different parameters, such as the ability to take the right classification decisions [] or in terms of key opinion leaders' identification []. Finally, Li and Zaman [] measured trust with a proxy: "evaluate the trustworthiness of a particular user in a health care social network based on factors such as role and reputation of the user in the social community" [].
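
For readers less familiar with these offline metrics, the following toy sketch (invented recommendation lists and ratings, not data from any included study) shows how precision and recall over a top-k recommendation list and the mean absolute error are typically computed.

```python
# Toy computation of precision@k, recall@k, and mean absolute error (MAE).
import numpy as np

def precision_recall_at_k(recommended: list, relevant: set, k: int):
    """Precision and recall over the top-k recommended items."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def mean_absolute_error(y_true, y_pred) -> float:
    """MAE between observed and predicted ratings."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

recommended = ["recipe_a", "recipe_b", "recipe_c", "recipe_d", "recipe_e"]
relevant = {"recipe_b", "recipe_e", "recipe_f"}

p, r = precision_recall_at_k(recommended, relevant, k=5)
mae = mean_absolute_error([4, 3, 5, 2], [3.5, 3, 4, 2.5])
print(f"precision@5={p:.2f}, recall@5={r:.2f}, MAE={mae:.2f}")
```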

User Evaluations
Overview

Of the full set of papers, 53% (39/73) included participants in their HRS evaluation, with an average sample size of 59 (SD 84) participants (excluding the outlier of 8057 participants recruited in the study by Cheung et al []). On average, studies ran for more than 2 months (68, SD 56 days) and included all age ranges. There is a trend of increasing sample size and study duration over the years. However, only 17 studies reported the study duration; therefore, these trends were not significant. Surveys (12/39, 31%), user studies (10/39, 26%), and deployments in the wild (10/39, 26%) were the most used user evaluations. Only 6 studies used an RCT to evaluate their HRS. Finally, although all the included studies focused on HRSs and were dealing with sensitive data, only 12% (9/73) [,,-,,,] reported ethical approval by a review board.

Surveys

No universal survey was found, as all the studies deployed a distinct survey. Ge et al [] used the System Usability Scale and the framework of Knijnenburg et al [] to explain the user experience of recommender systems. Esteban et al [] designed their own survey with 10 questions to inquire about user experience. Cerón-Rios [] relied on the ISO/IEC (International Organization for Standardization/International Electrotechnical Commission) 25000 standard to select 7 usability metrics to evaluate usability. Although most studies did not explicitly report the surveys used, user experience was a popular evaluation metric, as in the study by Wang et al []. Other metrics range from measuring user satisfaction [,] to perceived prediction accuracy [] (with 4 self-composed questions). Nurbakova et al [] combined data analytics with surveys to map their participants' psychological background, including orientations to happiness measured using the Peterson scale [], personality traits using the Mini-International Personality Item Pool [], and Fear of Missing Out based on the Przybylski scale [].

Single-Session Evaluations (User Studies)

A total of 10 studies recruited users and asked them to perform certain tasks in a single session. Yang et al [] performed a 60-person user study to assess feasibility and effectiveness. Each participant was asked to rate meal recommendations relative to those made using a traditional survey-based approach. In a study by Gutiérrez et al [], 15 users were asked to use the health augmented reality assistant to measure the qualities of the recommender system, users' behavioral intentions, perceived usefulness, and perceived ease of use. Jiang and Xu [] performed 30 consultations and invited 10 evaluators majoring in medicine and information systems to obtain an average rating score and nDCG. Radha et al [] used comparative questions to evaluate feasibility. Moreover, Cheng et al [] used 2 user studies to rank two degrees of compromise (DOC). A low DOC assigns more weight to the algorithm, and a high DOC assigns more weight to the user's health perspective. Recommendations with a lower DOC are more efficient for the user's health, but recommendations with a high DOC could convince users to believe that the recommended action is worth doing. Other approaches used are structured interviews [], ranking [,], requests for unstructured feedback [,], and focus group discussions []. Finally, 3 studies [,,] evaluated their system through a heuristic evaluation with expert users.

In the Wild

Only 2 studies that tested their HRS in the wild recruited patients (people with a diagnosed health condition) in their evaluation. Yom-Tov et al [] provided 27 sedentary patients with type 2 diabetes with a smartphone-based pedometer and a personal plan for physical activity. They assessed the effectiveness by computing the amount of activity that the patient performed after the last message was sent. Lima-Medina et al [] interviewed 45 patients with cardiovascular problems after a 6-month study period to measure (1) social management results, (2) health care program results, and (3) recommendation results. Rist et al [] performed an in-situ evaluation in the apartment of an older couple and used the data logs to describe the usage but augmented the data with a structured interview.

Yang et al [] conducted a field study with 227 anonymous users that consisted of a training stage and a testing stage to assess the prediction accuracy. Buhl et al [] created 3 user groups according to the recommender technique used and analyzed log data to compare the response rate, open email rate, and consecutive log-in rate. Similarly, Huang et al [] compared the ratio of recommended doctors chosen and reserved by patients with the recommended doctors. Lin et al [] asked 6 participants to use their HRSs for 5 weeks, measured system performance, studied user feedback on the recommendations, and concluded with an open user interview. Finally, Ali et al [] asked 10 volunteers to use their weight management system for a couple of weeks. However, they did not focus on user-centric evaluation, as "only a prototype of the [...] platform is implemented."

Rabbi et al [] followed a single-case design with multiple baselines []. Single-case experiments achieve sufficient statistical power with a large number of repeated samples from a single individual. Moreover, Rabbi et al [] argued that HRSs suit this requirement "since enough repeated samples can be collected with automated sensing or daily manual logging []." Participants were exposed to 2, 3, or 4 weeks of the control condition. The study ran for 7-9 weeks to compensate for novelty effects. Food and exercise log data were used to measure changes in food calorie intake and calorie loss during exercise.

Randomized Controlled Trials

Only 6 studies followed an RCT approach. In the RCT by Bidargaddi et al [], a large intervention group (n=192) and a control group (n=195) were asked to use a web-based recommendation service for 4 weeks that recommended mental health and well-being mobile apps. Changes in well-being were measured using the Mental Health Continuum-Short Form []. The RCT by Sadasivam et al [] enrolled 120 current smokers, an intervention group (n=74) and a control group (n=46), as a follow-up to a previous RCT [] that evaluated their portal, to specifically evaluate the HRS algorithm. Message ratings were compared between the intervention and control groups.

Cheung et al [] measured app loyalty through the number of weekly app sessions over a period of 16 weeks with 8057 users. In the study by Paredes et al [], 120 participants had to use the HRS for at least 26 days. A self-reported stress assessment was performed before and after the intervention. Agapito et al [] used an RCT with 40 participants to validate the sensitivity (true positives/[true positives+false negatives]) and specificity (true negatives/[true negatives+false positives]) of the DIETOS HRS. Finally, Luo et al [] performed a small-scale clinical trial for more than 3 months (but did not report the number of participants). Their primary outcome measures included 2 standard clinical blood tests: fasting blood glucose and laboratory-measured glycated hemoglobin, before and after the intervention.
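
For reference, sensitivity and specificity as used above follow the standard definitions over counts of true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP):

```latex
\[
\text{sensitivity} = \frac{TP}{TP + FN},
\qquad
\text{specificity} = \frac{TN}{TN + FP}
\]
```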

Interface
Overview

Only 47% (34/73) of the studies reported implementing a graphical user interface to communicate the recommended health items to the user. As illustrated in Table 3, 53% (18/34) use a mobile interface, usually through a mobile (web) app, whereas 41% (14/34) use a web interface to show the recommended items. Rist et al [] built a kiosk into older adults' homes, as illustrated in Figure 3. Gutiérrez et al [] used Microsoft HoloLens to project healthy food alternatives in augmented reality surrounding a physical object that the user holds, as shown in Figure 4.

Table 3. Distribution of the interfaces used among the different health recommender systems (n=34).
Interface Study Total studies, n (%)
Mobile [7,34,35,40,44,48,56,58,66,69,77,78,82-84,86,88,97] 18 (53)
Web [9,15,37,41,45,49,61,70,73,75,79,85,90,95] 14 (41)
Kiosk [33] 1 (3)
HoloLens [63] 1 (3)
Figure 3. Rist et al installed a kiosk in the home of older adults as a direct interface to their health recommender system.
Figure 4. An example of the recommended healthy alternatives by Gutiérrez et al.
Visualization

A total of 7 studies [,,,,,,], or approximately one-fourth of the studies with an interface, included visualizations. However, the approach used was different for all studies, as shown in Table 4. Stars to show the relevance of a recommended item are only used by Casino et al [] and Gutiérrez et al []. Wayman and Madhvanath [] also used bar charts to visualize the progress toward a health goal. They visualize the healthy proportions, that is, what the user should consume. Somewhat more complex visualizations are used by Ho and Chen [], who visualized the user's ECG zones. Paredes et al [] presented an emotion graph as an input screen. Rist et al [] visualized an example of how to perform the recommended activity.

Table 4. Distribution of the visualizations used among the different health recommender systems (n=7).
Visualization technique Study Total studies, n (%)
Bar charts Wayman and Madhvanath [37] and Gutiérrez et al [63] 2 (29)
Heatmap Ho and Chen [88] 1 (14)
Emotion graph Paredes et al [34] 1 (14)
Visual example of activity Rist et al [33] 1 (14)
Map Avila-Vazquez et al [79] 1 (14)
Star rating Casino et al [97] 1 (14)
Transparency

In the study by Lage et al [], participants expressed that:

they would like to have more control over recommendations received. In that sense, they suggested more information regarding the reasons why the recommendations are generated and more options to assess them.

A total of 7 studies [,,,,,,] explained the reasoning behind recommendations to end users in the user interface. Gutiérrez et al [] provided recommendations for healthier food products and mentioned that the items are based on the users' profile. Ueta et al [] explained the relationship between the recommended dishes and a person's health conditions. For example, a person with acne can see the following text: "15 dishes that contained Pantothenic acid thought to be effective in acne a lot became a hit" []. Li and Kong [] showed personalized recommended health actions in a message center. Color codes are used to differentiate between reminders, missed warnings, and recommendations. Rabbi et al [] showed tailored motivational messages to explain why activities are recommended. For example, when the activity "walk near East Ave" is recommended, the app shows the additional message:

1082 walks in 240 days, 20 mins of walk everyday. Each walk nearly 4 min. Let us get 20 mins or more walk here today
[7]

Wayman and Madhvanath [] first visualized the user's personal nutrition profile and used the lower part of the interface to explain why the item was recommended. They provided an illustrative example of spaghetti squash. The explanation shows that:

This product is high in Dietary_fiber, which you could eat more of. Try to get 3 servings a week
[37]

Guo et al [] recommended doctors and showed a horizontal bar chart to visualize the user's values compared with the average values. Finally, Bidargaddi et al [] visualized how the recommended app overlaps with the goals set by the users, as illustrated in Figure 5.

Figure 5. A screenshot from the health recommender system of Bidargaddi et al. Note the blue tags illustrating how each recommended app matches the users' goals.

Principal Findings

HRSs encompass a multitude of subdomains, recommended items, implementation techniques, evaluation designs, and ways of communicating the recommended items to the target user. In this systematic review, we grouped the recommended items into 4 categories: lifestyle, nutrition, general health care information, and specific health conditions. There is a clear trend toward HRSs that provide well-being recommendations but do not directly intervene in the user's medical condition. For instance, about 70% (50/73; lifestyle and nutrition) focused on nonstrict medical recommendations. In the lifestyle group, physical activities (10/24, 42%) and advice on how to potentially change behavior (7/24, 29%) were recommended most often. In the nutrition group, these recommendations focused on nutritional advice (8/26, 31%), diets (7/26, 27%), and recipes (7/26, 27%). A similar trend was observed in the health care information group, where HRSs focused on guiding users to the appropriate environments, such as hospitals (5/23, 22%) and medical professionals (4/23, 17%), or on helping users find qualitative information (5/23, 22%) from validated sources or from the experiences of similar users and patients on health care forums (3/23, 13%). Thus, they only provide general information and do not intervene by recommending, for example, changing medication. Finally, when HRSs targeted specific health conditions, they recommended nonintervening actions, such as meditation sessions [] or books to read [].

Although collaborative filtering is ordinarily the most used technique in other domains [], here only 3 included studies reported the use of a collaborative filtering approach. Moreover, 44% (32/73) of the studies applied a hybrid approach, showing that HRS data sets might need special attention, which might also be the reason why all 73 studies used distinct data sets. In addition, the HRS evaluations varied greatly and were divided between evaluations in which the end user was involved and evaluations that did not involve users (offline evaluations). Only 47% (34/73) of the studies reported implementing a user interface to communicate recommendations to the user, despite the need to show the rationale of recommendations, as echoed by many researchers and practitioners []. Moreover, only 7 of these studies included a (basic) visualization.

Unfortunately, this general lack of understanding of how to report HRSs might introduce researcher bias, as a researcher is currently completely unconstrained in defining what and how to measure the added value of an HRS. Therefore, further debate in the health recommender community is needed on how to define and measure the impact of HRSs. On the basis of our review and contribution to this discussion, we put forward a set of essential information that researchers should report in their studies.

Considerations for Practice

The previously discussed results have direct implications in practice and provide suggestions for future research. Figure 6 shows a reference frame of these requirements that can be used in future studies as a quality assessment tool.

Figure 6. A reference frame to report health recommender system studies. On the basis of the results of this study, we propose that it should be clear what and how items are recommended (A), who the target user is (B), which data are used (C), and which recommender techniques are applied (D). Finally, the evaluation design should be reported in detail (E).
Define the Target User

As shown in this review, HRSs are used in a plethora of subdomains, and each domain has its own experts. For example, in nutrition, the expert is most probably a dietician. However, the user of an HRS is usually a layperson without the knowledge of these domain experts, who often have different viewing preferences []. Furthermore, each user is unique. All individuals have idiosyncratic reasons for why they act, think, behave, and feel in a certain way at a specific stage of their life []. Not everybody is motivated by the same elements. Therefore, it is important to know the target user of the HRS. What is their previous knowledge, what are their goals, and what motivates them to act on a recommended item?

Show What Is Recommended (and How)

Researchers have become aware that accuracy is not sufficient to increase the effectiveness of a recommender system []. In recent years, research on human factors has gained attention. For example, He et al [] surveyed 24 existing interactive recommender systems and compared their transparency, justification, controllability, and diversity. However, none of these 24 papers discussed HRSs. This indicates the gap between HRSs and recommender systems in other fields. Human factors have gained interest in the recommender community by "combining interactive visualization techniques with recommendation techniques to support transparency and controllability of the recommendation process" []. Nevertheless, in this review, only 10% (7/73) of the studies explained the rationale of recommendations, and only 10% (7/73) included a visualization to communicate the recommendations to the user. We do not argue that all HRSs should include a visualization or an explanation. Nevertheless, researchers should pay attention to the delivery of these recommendations. Users need to understand, believe, and trust the recommended items before they can act on them.

To compare and assess HRSs, researchers should unambiguously report what the HRS is recommending. After all, typical recommender systems act like a black box, that is, they show suggestions without explaining the provenance of these recommendations []. Although this approach is suitable for typical e-commerce applications that involve little risk, transparency is a core requirement in higher-risk application domains such as health []. Users need to understand why a recommendation is made, to assess its value and importance []. Moreover, health information can be cumbersome and not always easy to understand or situate within a specific health condition []. Users need to know whether the recommended item or activity is based on a trusted source, tailored to their needs, and actionable [].

Report the Data Set Used

All 73 studies used a distinct data set. Furthermore, some studies combine data from multiple databases, making it even more difficult to estimate the quality of the data []. Moreover, most studies use self-generated data sets. This makes it difficult to compare and externally validate HRSs. Therefore, we argue that researchers should clarify the data used and potentially share whether these data are publicly available. However, health data are often highly privacy sensitive and cannot always be shared among researchers.

Outline the Recommender Techniques

The results show that there is no panacea for which recommender technique to use. The included studies range from logic filters to traditional recommender techniques, such as collaborative filtering and content-based filtering, to hybrid solutions and self-developed algorithms. However, with 44% (32/73), there is a strong trend toward the use of hybrid recommender techniques. The low number of collaborative filtering techniques might be related to the fact that the evaluation sample sizes were also relatively low. Unfortunately, some studies have not fully disclosed the techniques used and only reported on the main algorithm used. It is remarkable that studies published in high-impact journals, such as the studies by Bidargaddi et al [] and Cheung et al [], did not provide data on the recommender technique used. Nonetheless, disclosing the recommender technique allows other researchers not just to build on empirically tested technologies but also to verify whether key variables are included []. User data and behavior information can be identified to augment theory-based studies []. Researchers should show that the algorithm is capable of providing valid and trustworthy recommendations to the user based on their available data set.

Elaborate on the Evaluation Protocols

HRSs can be evaluated using different evaluation protocols. Nonetheless, the protocol should be outlined mainly by the research goals of the authors. On the basis of the papers included in this study, we differentiate between two approaches. In the first approach, the authors aim to influence their users' health, for instance, by providing personalized diabetes guidelines [] or prevention exercises for users with low back pain []. Therefore, the end user should always be involved in both the design and evaluation processes. However, only 8% (6/73) performed an RCT and 14% (10/73) deployed their HRS in the wild. This lack of user involvement has been noted previously by researchers and has been identified as a major challenge in the field [,]. Nonetheless, in other domains, such as job recommenders [] or agriculture [], user-centered design has been proposed as an important methodology in the design and development of tools used by end users, with the purpose of gaining trust and promoting technology acceptance, thereby increasing adoption with end users. Therefore, we recommend that researchers evaluate their HRSs with actual users. A potential model for a user-centric approach to recommender system evaluation is the user-centric framework proposed by Knijnenburg et al [].

Research protocols need to be elaborated and approved by an ethical review board to prevent any adverse impact on users. Authors should report how they informed their users and how they safeguarded the users' privacy. This is in line with modern journal and conference guidelines. For example, the editorial policies of the Journal of Medical Internet Research state that "when reporting experiments on human subjects, authors should indicate IRB (Institutional Research Board, also known as REB) approval/exemption and whether the procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation" []. Nevertheless, only 12% (9/73) of the studies reported approval by an ethical review board. Acquiring review board approval will help the field mature and transition from small incremental studies to larger studies with representative users that yield more reliable and valid findings.

In the second approach, the authors aim to design a better algorithm, where better is again defined by the authors. For instance, the algorithm might perform faster, be more accurate, or require less computing power. Although the F1 score, the mean absolute error, and nDCG are well defined and known within the recommender domain, other parameters are more ambiguous. For example, performance or effectiveness can be assessed using different measurements. A health parameter can be monitored, such as the duration that a user remains within healthy ranges []. Alternatively, it could be a predictive parameter, such as improved precision and recall as a proxy for performance []. Unfortunately, this difference makes it difficult to compare health recommendation algorithms. Furthermore, this inconsistency in measurement variables makes it infeasible to state in this systematic review which recommender techniques to use. Therefore, we argue that HRS algorithms should always be evaluated in a way that allows other researchers to validate the results, if needed.
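For the well-defined metrics mentioned above, reporting the exact computation helps other researchers reproduce and compare evaluations. The sketch below gives straightforward implementations of precision, recall, F1, and nDCG for a top-k recommendation list; it is an illustrative example rather than the procedure used by any included study, and the item identifiers and relevance judgments are hypothetical.

```python
import numpy as np

def precision_recall_at_k(recommended, relevant, k):
    """Precision and recall of a top-k recommendation list against a set of relevant items."""
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def ndcg_at_k(recommended, relevance, k):
    """Normalized discounted cumulative gain for graded relevance judgments."""
    gains = np.array([relevance.get(item, 0.0) for item in recommended[:k]])
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    dcg = float(np.sum(gains * discounts))
    ideal = np.sort(list(relevance.values()))[::-1][:k]   # best possible ordering
    idcg = float(np.sum(ideal * discounts[:len(ideal)]))
    return dcg / idcg if idcg > 0 else 0.0

# Toy example with hypothetical item IDs and relevance judgments
recommended = ["walk_10min", "low_salt_recipe", "sleep_article", "yoga_video"]
relevant = ["low_salt_recipe", "yoga_video", "step_challenge"]
relevance = {"low_salt_recipe": 3, "yoga_video": 2, "step_challenge": 1}

p, r = precision_recall_at_k(recommended, relevant, k=4)
f1 = 2 * p * r / (p + r) if (p + r) else 0.0
print(f"precision@4={p:.2f}, recall@4={r:.2f}, F1={f1:.2f}, nDCG@4={ndcg_at_k(recommended, relevance, 4):.2f}")
```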

Limitations

This study has some limitations that affect its contribution. Although an extensive search was conducted in scientific databases and the most relevant health care informatics journals, some relevant literature in other domains might have been excluded. The keywords used in the search string could have impacted the results. First, we did not include domain-specific constructs of health, such as asthma, pregnancy, and iron deficiency. Many studies may implicitly report health-related computer-generated recommendations when they research the impact of a new intervention. In these studies, however, building an HRS is often not the goal, and such studies were therefore excluded. Second, we searched for papers that reported studying an HRS; nonincluded studies might have built an HRS but did not report it as such. Given our RQs, we deemed it important that authors explicitly reported their work as a recommender system. To conclude, in this study, we provide a large cross-domain overview of health recommender techniques targeted at laypersons and deliver a set of recommendations that could help the field of HRS mature.

Conclusions

This study presents a comprehensive report on the use of HRSs across domains. We have discussed the different subdomains in which HRSs are applied, the different recommender techniques used, the different manners in which they are evaluated, and finally, how they present the recommendations to the user. On the basis of this analysis, we have provided research guidelines toward consistent reporting of HRSs. We found that although most applications are intended to improve users' well-being, there is a significant opportunity for HRSs to inform and guide users' health actions. Although many of the studies lack a user-centered evaluation approach, some studies performed full-scale RCT evaluations or elaborate in-the-wild studies to validate their HRS, showing that the field of HRS is slowly maturing. On the basis of this study, we argue that it should always be clear what the HRS is recommending and whom these recommendations are for. Graphical assets should be added to show how recommendations are presented to users. Authors should also report which data sets and algorithms were used to calculate the recommendations. Finally, detailed evaluation protocols should be reported.

We conclude that the results motivate the creation of richer applications in the future design and development of HRSs. The field is maturing, and interesting opportunities are being created to inform and guide health actions.

Acknowledgments

This work was part of the research project PANACEA Gaming Platform (project HBC.2016.0177), which was financed by Flanders Innovation & Entrepreneurship, and the research project IMPERIUM with research grant G0A3319N from the Research Foundation Flanders (FWO) and the Slovenian Research Agency grant ARRS-N2-0101. Project partners were BeWell Innovations and the University Hospital of Antwerp.

Conflicts of Interest

None declared.






DOC: degrees of compromise
HRS: health recommender system
ISO/IEC: International Organization for Standardization/International Electrotechnical Commission
nDCG: normalized discounted cumulative gain
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RCT: randomized controlled trial
RQ: research question


Edited by M Eysenbach; submitted 29.01.20; peer-reviewed by A Calero Valdez, J Jiang; comments to author 10.03.20; revised version received 20.05.20; accepted 24.05.21; published 29.06.21

Copyright

©Robin De Croon, Leen Van Houdt, Nyi Nyi Htun, Gregor Štiglic, Vero Vanden Abeele, Katrien Verbert. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 29.06.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.


