The call for personalized medicine highlights the need for personalized (N-of-1) trials to find what treatment works best for individual patients. Conventional (between-subject) randomized controlled trials (RCT) yield effects for the ‘average patient,’ but a personalized trial administers all treatments within-subject, so benefits or harms to the individual patient can be identified. The design and analysis of personalized trials involve different strategies from the conventional RCT. These include how to adjust for any carryover effects from one intervention to another, how to handle missing data, and how to provide patients with insight into their data. In addition, a comprehensible report about trial results should be created for each patient and their clinician to facilitate their decision-making. This article describes strategies to address these design and analytic issues and introduces an R Shiny app that implements each of the design and statistical strategies. To illustrate, we also provide a concrete example of a personalized trial series designed to increase activity (i.e., walking steps) in patients with chronic lower back pain (CLBP).
Keywords: personalized medicine, Fitbit data, imputation, computing platforms
Clinicians often rely on evidence from conventional between-subject randomized controlled trials (RCTs) for guidance to treat their patients. Such trials randomly allocate research participants to two or more treatment arms and/or a control group. At the conclusion of the trial, the average effects for each treatment group are computed and then compared for a measured outcome, such as symptoms, pain, health-related quality of life, and so on. Assuming there is minimal variability in response to treatment, the best prediction of treatment benefit for an individual patient can be estimated from the overall trial effect (the main effect). More often, however, heterogeneity of treatment effects (HTEs; i.e., effect variability across patients) is evident among different patients and subgroups participating in conventional RCTs (Gabler et al., 2011; Guyatt et al., 1988; Vijan, 2020). This variability often forces clinicians to make educated guesses about the optimal treatment for specific patients (Davidson et al., 2018). Guyatt et al. (1988) proposed a personalized (N-of-1) trial design as an alternative to the conventional RCT to identify the most beneficial therapy for each patient. In a personalized trial, the individual patient is the sole unit and receives all relevant treatment/control conditions successively (Guyatt et al., 1986). By comparing the effects of one or more treatments within-subject, the comparative benefits (or potential harms) provided by each treatment can be discerned. Furthermore, the patients can receive their trial data in partnership with their clinician, so decisions can be made about which treatment to choose going forward.
Besides the basic differences between traditional and personalized trials described above, such trials diverge in other significant ways. The primary endpoint in a conventional RCT is assessed at the end of the study period. In personalized trials, the primary outcome is measured periodically throughout the trial because the outcomes must be measured in response to each treatment. It follows that traditional RCTs typically collect a few outcome measures from many people, whereas a personalized trial collects many, frequent measurements from a single individual. Another difference is that personalized trials are most appropriate for medical conditions or symptoms that should show rapid change when treatments are introduced or withdrawn. Thus, patients with chronic, stable, or slowly progressive conditions are the best candidates for personalized trials. (Acute or quickly deteriorating conditions are inappropriate [Epstein & Dallery, 2022], and irreversible interventions are ruled out because of the within-subject nature of personalized trials.)
Despite their advantages, personalized trials have not received substantial uptake, largely because clinicians and researchers have lacked an automated, sustainable, user-friendly technology platform to permit the facile conduct of such trials. The situation has changed with the recent advent of health apps, texting, remote sensors, and automated platforms that enable the virtual delivery of instructions, interventions, and data collection. These state-of-the-art remote technologies facilitate convenient recruitment, screening, and participation in personalized trials at the patient’s home or work.
The design, implementation, and analysis of personalized trial data require special strategies that differ from those used in conventional RCT design. First, the researcher must know how to statistically power the personalized trial to detect within-subject differences in the effects of each treatment. In contrast to the conventional RCT, the statistical power of a personalized trial depends on the number of assessments and treatment periods for each patient (Freeman, 1989; Willan & Pater, 1986). Second, the researcher should determine the HTE associated with the type of treatment and patient. The initial step is to ascertain how large a series of personalized (N-of-1) trials should be conducted. Then, a comparison is made of how each patient’s outcomes differ from the effects pooled across all patients in the series. Graphical displays or statistical tests of the data can establish the HTEs (Araujo et al., 2016; Cheung et al., 2020). Procedures to calculate these values will be described below.
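The dependence of power on the number of treatment periods and assessments can be explored by simulation. The sketch below is a minimal Python illustration (the article's own analyses used R); the alternating block design, AR(1) error model, effect size, and the simple t-test are assumptions chosen for illustration, not the study's actual power procedure.

```python
import numpy as np
from scipy import stats

def simulate_power(effect=0.5, rho=0.3, n_blocks=6, days_per_block=14,
                   n_sims=500, alpha=0.05, seed=1):
    """Monte Carlo power for one N-of-1 trial with alternating
    treatment/control blocks and AR(1) day-to-day errors."""
    rng = np.random.default_rng(seed)
    # Alternating blocks: 0 = control, 1 = treatment
    x = np.repeat(np.arange(n_blocks) % 2, days_per_block)
    n = len(x)
    hits = 0
    for _ in range(n_sims):
        # Generate AR(1) errors with unit marginal variance
        e = np.empty(n)
        e[0] = rng.normal()
        for t in range(1, n):
            e[t] = rho * e[t - 1] + rng.normal(scale=np.sqrt(1 - rho ** 2))
        y = effect * x + e
        # Simplified analysis: t-test of treatment vs. control days
        p = stats.ttest_ind(y[x == 1], y[x == 0]).pvalue
        hits += p < alpha
    return hits / n_sims
```

Rerunning this function over a grid of `n_blocks` and `days_per_block` values shows how adding treatment periods or assessments changes the chance of detecting a within-subject effect of a given size.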
Third, special strategies should be adopted, because treatment arms in a personalized trial are not independent; participants receive all treatments. Also, the outcome variable is collected very frequently or continuously throughout the trial. Thus, the outcome data constitute a time series, which should be analyzed using a correlation structure (Shaffer et al., 2018). An additional complication is that exposure to all treatments may produce carryover effects (i.e., when a treatment block influences the outcomes of a subsequent block). One option to minimize carryover effects is to insert no-treatment breaks between different conditions. However, any residual carryover effects can bias estimated treatment differences toward the null, inflating type-II error (Alemayehu et al., 2017). A statistical strategy, described below, adjusts for carryover effects (Shaffer et al., 2018) and evaluates the effect of each treatment the patient receives on the outcome variable.
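One common way to operationalize a carryover adjustment is to include the previous block's treatment as an additional covariate in the outcome regression. The Python sketch below builds such a day-level design matrix; the block labels, column names, and block lengths are hypothetical, and this is a generic illustration rather than the article's exact specification.

```python
import numpy as np
import pandas as pd

def design_with_carryover(block_tx, days_per_block):
    """Expand a block-level treatment sequence into a day-level design
    matrix with a carryover column: the treatment of the previous block.
    block_tx: list of labels, e.g. ['usual', 'yoga', 'usual', 'massage'].
    """
    day_tx = np.repeat(block_tx, days_per_block)
    prev_block = ['none'] + list(block_tx[:-1])   # carryover source
    day_prev = np.repeat(prev_block, days_per_block)
    df = pd.DataFrame({'treatment': day_tx, 'carryover': day_prev})
    # One-hot encode both factors, dropping reference levels, so the
    # matrix can be passed directly to a regression routine
    return pd.get_dummies(df, columns=['treatment', 'carryover'],
                          drop_first=True)

X = design_with_carryover(['usual', 'yoga', 'usual', 'massage'],
                          days_per_block=14)
```

Fitting the outcome on both the current-treatment and carryover columns lets the carryover coefficients absorb residual effects of the preceding block, so the treatment contrasts are estimated net of carryover.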
Fourth, personalized trials require frequent collection of the outcome variables for weeks or months. Both patient non-adherence and technological failures may result in missing data. Some researchers prefer to conduct statistical analyses of the treatment effects even when there are missing data; others prefer to use data imputation strategies. Below we describe an analytic procedure to impute missing data.
Fifth, a major aim of the personalized trial is the provision of feedback to the patient about their results after the trial. This feedback should be reported in an easy-to-comprehend format that permits the patient and their clinician to identify which treatment provided the greatest benefits and fewest side effects. Below, we will describe an R Shiny app that provides a user-friendly, graphical display of the personalized trial results to the patient and their clinician. This app also computes within-subject effects for changes in the outcome as a function of each intervention and time period, adjusts for carryover effects, and imputes missing values if the researcher so prefers.
To demonstrate the concrete use of these strategies, we describe a series of personalized (N-of-1) trials evaluating the effects of two treatments, yoga and massage, on decreasing sedentary behavior in patients with chronic lower back pain (CLBP). More than 25% of the U.S. adult population is affected by lower back pain (Deyo et al., 2006), which is the fifth most common cause of physician visits (Khan et al., 2014). CLBP leads to limitations in daily activities and low levels of physical exercise (Chou, 2010). For example, a common consequence of CLBP is a significant reduction in everyday walking. A substantial body of epidemiological evidence shows that physical inactivity is a risk factor for cardiovascular disease, stroke, and other chronic conditions.
To manage their pain, those with CLBP often take medication; however, recent clinical guidelines recommend less use of pharmacotherapy for CLBP management (Qaseem et al., 2017) because of the dangers of addiction and accidental overdose of opioid treatment (Dowell et al., 2016). The Centers for Disease Control and Prevention (CDC) have called for alternative approaches to manage chronic pain. Both yoga and massage have been proposed as treatment options.
In our illustrative example, a personalized trial scenario for treating CLBP administered yoga and massage in a within-subject cross-over design. The outcome variable was the number of daily steps (Hendrick et al., 2010), assessed objectively with a wearable Fitbit activity monitor. The primary hypothesis was that both yoga and massage treatments would significantly increase the CLBP patient’s walking steps over their baseline compared to usual care. We had no hypothesis about whether massage versus yoga had differential efficacy. In light of the HTE found with other therapies, we expected the comparative effectiveness and degree of response to yoga and massage would vary across patients.
In Section 2.1, we will briefly go over the design of the CLBP trial. In Section 2.2, we will expand on the imputation of missing data in the step count data. The analysis of the step count data will be summarized in Section 2.3. In Section 3, we discuss the outcome of the trial and compare the individual results with a pooled analysis. The R Shiny app will be described in detail in Section 4. We conclude this article with a discussion in Section 5.
A series of 60 randomized personalized trials was conducted. That is, each of 60 patients suffering from CLBP participated in their own within-subject, multiple-time-period cross-over trial testing the effects of Swedish massage and yoga versus usual care on physical activity (i.e., walking steps). All patients received a series of yoga and massage treatments through Zeel (a commercial wellness service). Zeel allows participants to book in-home one-on-one yoga sessions with a certified yoga instructor. Yoga poses were selected based on those previously used by Sherman et al. (2005) in a study assessing the effect on CLBP. Patients could also book in-home massages with licensed massage therapists through Zeel.
Patients who met inclusion criteria underwent a baseline assessment period of 2 weeks (see Figure 1). During this period, patients were asked to wear a Fitbit Charge 3 activity-monitoring device that collected minute-by-minute step count data 24 hours a day. During the 2-week baseline period, patients were discouraged from receiving yoga and/or massage treatments. Adherence to wearing the Fitbit device was assessed during this 2-week baseline period. Patients who did not achieve a minimum of 80% adherence (at least 11 of 14 days) during baseline did not continue to the intervention phases. The remaining patients were randomized to one of two different treatment sequences, which are depicted in Figure 1. Two different sequences of the treatments were created to achieve balance in the assignment of treatments over time so that treatment effect estimates were unbiased by time-dependent confounders. Patients were randomized 1:1 to the treatment sequences (i.e., 30 participants in each treatment sequence).
As shown in Figure 1, the treatment sequences were designed to deliver two intervention arms (massage and yoga) and a usual care arm (no intervention) in a multiple-crossover design of six treatment blocks. Each treatment block lasted a total of 2 weeks. During intervention treatment blocks, patients were instructed to request Zeel to book two 1-hour sessions of in-home Swedish massage (massage treatment blocks) or two 1-hour sessions of in-home yoga (yoga treatment blocks) each week, at least 48 hours apart. No treatment was provided to patients during the usual care treatment blocks; instead, patients were asked to practice the techniques they normally used to manage their CLBP. Participants were discouraged from receiving additional massage or yoga sessions outside of the eight massage sessions and eight yoga sessions delivered throughout the trial.
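The counterbalanced block structure and 1:1 allocation described above can be sketched as follows. The two sequence orderings shown are illustrative placeholders (the actual orders appear in Figure 1); the function name and seed are likewise hypothetical.

```python
import random

# Hypothetical counterbalanced orderings of the six 2-week blocks.
# Each sequence reverses the massage/yoga ordering of the other so
# that treatment timing is balanced across the two groups.
SEQ_A = ['massage', 'usual', 'yoga', 'massage', 'usual', 'yoga']
SEQ_B = ['yoga', 'usual', 'massage', 'yoga', 'usual', 'massage']

def randomize(participant_ids, seed=2022):
    """Randomize participants 1:1 to the two sequences by shuffling
    the whole cohort and splitting it in half, so the allocation is
    exactly balanced."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return ({pid: SEQ_A for pid in ids[:half]}
            | {pid: SEQ_B for pid in ids[half:]})
```

Splitting a shuffled cohort (rather than flipping a coin per person) guarantees the 30/30 balance the trial design calls for.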
Study participants were primarily recruited via emails sent to all employees at Northwell Health, the largest private health care employer in New York state. The email invited people with CLBP to participate in a personalized trial. Other recruitment strategies included referrals from Northwell Occupational Health Services (OHS), social media advertising, flyers distributed to Northwell Health facilities, and information presented at Northwell Health Wellness events. Those who were interested were asked to complete an online screening measure about both inclusion and exclusion criteria for the trial. If the individual was deemed to be eligible, an electronic consent form and additional information were provided.
The inclusion criteria for a participant included the following:
Fluent in English
Able to regularly access an email account and a smartphone
Experiencing symptoms of lower back pain for at least the minimum duration specified by the protocol
A self-reported pain intensity at or above the threshold specified by the protocol
Able to receive therapeutics (2x per week; between 8 am and 10 pm)
Persons who met any of the following criteria were excluded:
History of spinal surgery
Complex back pain
History of a serious mental health condition or psychiatric disorder
History of opioid use disorder or current opioid users
History of treatment for any substance abuse
Current physical activity restrictions or previously advised that yoga or massage is unsafe for their condition
Planned travel outside of the United States within the treatment period
Planned surgery/procedures within 6 months of recruitment
Each participant was asked to wear a Fitbit Charge 3 device 24 hours a day. For each participant, the Fitbit data were recorded in a minute-by-minute file with step count, heart rate, date, and time (1,440 minutes per day for 14 weeks). Step count is recorded as 0 instead of ‘NA’ during a non-wear period or if the device battery is drained. However, the heart rate data are recorded as ‘NA’ if a participant is not wearing the device or if the device battery is dead. Even though participants had a high adherence rate, wearing the Fitbit at least 80% of the time, the imputation of missing step counts is vital for the analysis in the next section. Thus, we present a model for the imputation of missing step count data.
However, when a participant is not wearing the device at time $t$, the recorded step count of 0 does not reflect true activity; we therefore flag minute $t$ as missing whenever the heart rate at that minute is ‘NA’. Model (1) is a Poisson regression of the minute-by-minute step count on the observed covariates (heart rate and time).
We fit model (1) to the complete (non-missing) data to obtain the regression coefficients, use those coefficients to impute the missing data, and finally fit a penalized spline curve to improve the smoothness of the imputed step counts.
The treatment arms are not independent in personalized trials, unlike in traditional RCTs. In personalized trials, data are collected continuously throughout the trial, so the resulting time series needs to be analyzed using a correlation structure (Shaffer et al., 2018).
We analyzed the daily step count data using generalized least squares (GLS) regression. The GLS estimator of linear regression is a generalization of the ordinary least squares (OLS) estimator, used when the OLS errors violate one of the assumptions of the Gauss-Markov theorem, namely that of equal variances. Note that the outcome $y_t$ is the imputed daily step count from Section 2.2, where the model is $y = X\beta + \varepsilon$, with $X$ encoding the treatment indicators and $\varepsilon$ the error vector with the covariance matrix $\operatorname{Cov}(\varepsilon) = \sigma^2\Omega$.
However, in time-series data, the errors from the regression model are unlikely to be independent. Generalized least-squares regression extends OLS estimation of the standard linear model by providing for possibly unequal error variances and correlations between different errors. Let $\operatorname{Cov}(\varepsilon) = \sigma^2\Omega$, where $\Omega$ is a positive-definite matrix. There is an invertible matrix $S$ such that $\Omega = SS^{\top}$. Premultiplying the model by $S^{-1}$ yields transformed errors $S^{-1}\varepsilon$ with covariance $\sigma^2 I$, so applying OLS to the transformed model gives the GLS estimator $\hat{\beta}_{\mathrm{GLS}} = (X^{\top}\Omega^{-1}X)^{-1}X^{\top}\Omega^{-1}y$.
We can fit several time-series models with GLS or generalized estimating equations (GEE). The autoregressive model with order 1 [AR(1)] is one of the most frequently used models in personalized trials (Chen & Chen, 2014; Kronish et al., 2019). (This allows for correlation between successive data points, but in such a way that, given the most recent data point for a subject, those further back have no additional predictive value.) In an AR(1) model, the regression errors are assumed to be stationary. The errors are assumed to have the same expectation and the same variance: $E(\varepsilon_t) = 0$ and $\operatorname{Var}(\varepsilon_t) = \sigma^2$ for all $t$.
In addition, the model postulates that correlations diminish in a specific form as observations grow farther apart: $\operatorname{Corr}(\varepsilon_t, \varepsilon_{t+h}) = \rho^{|h|}$ for lag $h$, where $|\rho| < 1$ is the lag-1 autocorrelation.
If both the values of $\rho$ and $\sigma^2$ are unknown, they can be estimated from the data along with the regression coefficients (e.g., by maximum likelihood or an iterated feasible GLS procedure).
Although the original goal of the CLBP study was to randomize 60 participants to receive the protocol, all study activities were halted in March 2020 as a result of the COVID-19 crisis in New York. The analysis below describes the data of the 26 study participants who were able to complete their intervention treatment blocks before the study was ended for infection-control purposes. Fourteen and 12 participants were randomized to the left and right treatment sequences of the flowchart, respectively (Figure 1).
Figure 2 displays the summarized treatment effects obtained from the imputed daily step counts using GLS regression for all 26 patients. About 85% (22 of the 26) of the participants showed no difference in imputed daily step counts for yoga compared to usual care, massage compared to usual care, or massage compared to yoga (i.e., the 95% confidence interval [CI] of the treatment effect contained 0).
The three comparisons summarized in Figure 2 are yoga versus usual care, massage versus usual care, and massage versus yoga.
None of the participants had a significantly higher daily step count under usual care than under yoga or massage.
In Section 3, we focused on the summarized treatment effects obtained from the study and highlighted the HTEs for both yoga and massage compared to usual care. However, in personalized trials, the primary goal is to provide trial data to each patient so they can evaluate all treatment effects and select the best treatment option going forward. We developed a user-friendly R Shiny app for reporting all the analyses of the step count data from the CLBP trial for each patient (video attached). The R Shiny app is available at https://roadmap2health.io/hdsr/fitbit-shiny/ and the R code for the R Shiny app is available at https://github.com/ROADMAP-Columbia/fitbit-shiny (R Core Team, 2022). The Shiny app reports treatment effects from the GLS regression (table and forest plot), a line graph, and a boxplot of the step counts for each patient, as shown in Figure 3. The analyses (GLS regression, line graph, and boxplot) can be easily updated for both imputed and non-imputed daily step counts. The R Shiny app can provide insight to patients about their health data.
The R Shiny app can be used to generate analyses and descriptive statistics for the effectiveness of the outcomes of the two treatments utilized in this trial. By displaying results in this manner, research team members can easily identify the most effective treatments for each participant simply by clicking on the participant’s ID number within the R Shiny app. Without the flexibility of this R Shiny app, the research coordinator would need to consult with a statistician or attempt to run analyses themselves to generate these results, leading to more effort and the potential for errors.
In addition, the R Shiny app runs on the linked participant data set. This allows the analysis results to be continuously updated as participant data are added, easily generating both interim and final results for each participant through the course of the study. Further, the analyses utilized in this R Shiny app can be easily modified to fit various N-of-1 designs to account for variations in the number of treatment blocks, duration of treatment, and other design elements essential to N-of-1 trials.
In this article, we presented the design and analysis of a personalized trial treating CLBP patients who were administered yoga and massage in a within-subject cross-over design. The discussion focused on recruitment, including inclusion and exclusion criteria. We explained the imputation of minute-by-minute Fitbit step data and why personalized trials are typically analyzed using GLS: outcomes are serially dependent and susceptible to carryover effects. A summary of the individual trial data and forest plots (Figure 2 and Table 1) highlighted the HTE across individuals and indicated why N-of-1 trials can be more informative to patients and clinicians. Finally, we presented the R Shiny app that can deliver statistical analyses to researchers. The app can provide a detailed report about trial results to each patient to learn about the effects of the treatments and to help them decide about the best option for them.
We acknowledge some limitations of our approach. We opted to use GLS with an autoregressive order 1 [AR(1)] temporal correlation structure over GEE; the treatment effects from the two models, GLS with AR(1) and GEE with an unstructured correlation, were almost identical. The Poisson distribution used to model the imputation of (minute-by-minute) step counts may not always be the ideal model for all types of data. A negative binomial distribution captures overdispersion; that is, it allows the conditional variance of the outcome to be greater than its conditional mean, which offers greater flexibility for model fitting (Chandereng & Gitter, 2020). The Poisson distribution is also not ideal for capturing zero scores, which can be handled by a zero-inflated Poisson model (Cheung et al., 2018). In the future, the imputation of the step count can be improved with semiparametric models.
The R Shiny app can be used to securely transfer trial results to the participants. The ability to collect data instantly using wearable devices has eased the implementation of personalized trials. In the near future, we plan to streamline the data analysis process so that trial results are provided instantly once the trial is completed. We also plan to build a simplified version of the Shiny app that helps trial participants understand their data and the effect of each treatment for additional outcomes of interest (e.g., pain) that will be measured (one app for everything). The ultimate goal is to provide a tool for patients to gain insight into their data and select the best treatment based on a data-driven approach. The importance of analytics and computing platforms for understanding health data cannot be overemphasized.
The author thanks Dr. Ying Kuen Cheung, two anonymous reviewers, and the editor for constructive comments and references which helped improve the paper.
This work was supported by grants R01LM012836 from the National Library of Medicine of the National Institutes of Health and P30AG063786 from the National Institute on Aging of the National Institutes of Health. The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication. The views expressed in this paper are those of the authors and do not represent the views of the National Institutes of Health, the U.S. Department of Health and Human Services, or any other government entity.
Alemayehu, C., Mitchell, G., Aseffa, A., Clavarino, A., McGree, J., & Nikles, J. (2017). A series of n-of-1 trials to assess the therapeutic interchangeability of two enalapril formulations in the treatment of hypertension in Addis Ababa, Ethiopia: Study protocol for a randomized controlled trial. Trials, 18(1), Article 470. https://doi.org/10.1186/s13063-017-2212-0
Amtmann, D., Cook, K. F., Jensen, M. P., Chen, W.-H., Choi, S., Revicki, D., Cella, D., Rothrock, N., Keefe, F., Callahan, L., & Lai, J.-S. (2010). Development of a PROMIS item bank to measure pain interference. Pain, 150(1), 173–182. https://doi.org/10.1016/j.pain.2010.04.025
Araujo, A., Julious, S., & Senn, S. (2016). Understanding variation in sets of N-of-1 trials. PloS One, 11(12), Article e0167167. https://doi.org/10.1371/journal.pone.0167167
Chandereng, T., & Gitter, A. (2020). Lag penalized weighted correlation for time series clustering. BMC Bioinformatics, 21(1), Article 21. https://doi.org/10.1186/s12859-019-3324-1
Chen, X., & Chen, P. (2014). A comparison of four methods for the analysis of N-of-1 trials. PloS One, 9(2), Article e87752. https://doi.org/10.1371/journal.pone.0087752
Cheung, Y. K., Hsueh, P.-Y. S., Ensari, I., Willey, J. Z., & Diaz, K. M. (2018). Quantile coarsening analysis of high-volume wearable activity data in a longitudinal observational study. Sensors, 18(9), Article 3056. https://doi.org/10.3390/s18093056
Cheung, Y. K., Wood, D., Zhang, K., Ridenour, T. A., Derby, L., St Onge, T., Duan, N., Duer-Hefele, J., Davidson, K. W., Kronish, I., & Moise, N. (2020). Personal preferences for personalised trials among patients with chronic diseases: An empirical Bayesian analysis of a conjoint survey. BMJ Open, 10(6), Article e036056. http://doi.org/10.1136/bmjopen-2019-036056
Chou, R. (2010). Pharmacological management of low back pain. Drugs, 70(4), 387–402. https://doi.org/10.2165/11318690-000000000-00000
Davidson, K. W., Cheung, Y. K., McGinn, T., & Wang, Y. C. (2018, December 10). Expanding the role of N-of-1 trials in the precision medicine era: Action priorities and practical considerations. NAM Perspectives. https://nam.edu/expanding-the-role-of-n-of-1-trials-in-the-precision-medicine-era-action-priorities-and-practical-considerations/
Deyo, R. A., Mirza, S. K., & Martin, B. I. (2006). Back pain prevalence and visit rates: Estimates from US national surveys, 2002. Spine, 31(23), 2724–2727. https://doi.org/10.1097/01.brs.0000244618.06877.cd
Dowell, D., Haegerich, T. M., & Chou, R. (2016). CDC guideline for prescribing opioids for chronic pain—United States, 2016. JAMA, 315(15), 1624–1645. https://doi.org/10.1001/jama.2016.1464
Epstein, L. H., & Dallery, J. (2022). Family of single-case experimental designs. Harvard Data Science Review, (Special Issue 3). https://doi.org/10.1162/99608f92.ff9300a8
Freeman, P. (1989). The performance of the two-stage analysis of two-treatment, two-period crossover trials. Statistics in Medicine, 8(12), 1421–1432. https://doi.org/10.1002/sim.4780081202
Gabler, N. B., Duan, N., Vohra, S., & Kravitz, R. L. (2011). N-of-1 trials in the medical literature: A systematic review. Medical Care, 49(8), 761–768. https://doi.org/10.1097/mlr.0b013e318215d90d
Guyatt, G., Sackett, D., Adachi, J., Roberts, R., Chong, J., Rosenbloom, D., & Keller, J. (1988). A clinician’s guide for conducting randomized trials in individual patients. CMAJ: Canadian Medical Association Journal, 139(6), 497–503. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1268200/
Guyatt, G., Sackett, D., Taylor, D. W., Ghong, J., Roberts, R., & Pugsley, S. (1986). Determining optimal therapy—randomized trials in individual patients. New England Journal of Medicine, 314(14), 889–892. https://doi.org/10.1056/nejm198604033141406
Hendrick, P., Te Wake, A., Tikkisetty, A., Wulff, L., Yap, C., & Milosavljevic, S. (2010). The effectiveness of walking as an intervention for low back pain: A systematic review. European Spine Journal, 19(10), 1613–1620. https://doi.org/10.1007/s00586-010-1412-z
Khan, I., Hargunani, R., & Saifuddin, A. (2014). The lumbar high-intensity zone: 20 years on. Clinical Radiology, 69(6), 551–558. https://doi.org/10.1016/j.crad.2013.12.012
Kronish, I. M., Cheung, Y. K., Julian, J., Parsons, F., Lee, J., Yoon, S., Valdimarsdottir, H., Green, P., Suls, J., Hershman, D. L., & Davidson, K. W. (2019). Clinical usefulness of bright white light therapy for depressive symptoms in cancer survivors: Results from a series of personalized (N-of-1) trials. Healthcare, 8(1), Article 10. https://doi.org/10.3390/healthcare8010010
Qaseem, A., Wilt, T. J., McLean, R. M., Forciea, M. A., for the Clinical Guidelines Committee of the American College of Physicians; Denberg, T. D., Barry, M. J., Boyd, C., Chow, R. D., Fitterman, N., Harris, R. P., Humphrey, L. L., & Vijan, S. (2017). Noninvasive treatments for acute, subacute, and chronic low back pain: A clinical practice guideline from the American College of Physicians. Annals of Internal Medicine, 166(7), 514–530. https://doi.org/10.7326/m16-2367
R Core Team (2022). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org
Revicki, D. A., Chen, W.-H., Harnam, N., Cook, K. F., Amtmann, D., Callahan, L. F., Jensen, M. P., & Keefe, F. J. (2009). Development and psychometric analysis of the PROMIS pain behavior item bank. Pain, 146(1–2), 158–169. https://doi.org/10.1016/j.pain.2009.07.029
Shaffer, J. A., Kronish, I. M., Falzon, L., Cheung, Y. K., & Davidson, K. W. (2018). N-of-1 randomized intervention trials in health psychology: A systematic review and methodology critique. Annals of Behavioral Medicine, 52(9), 731–742. https://doi.org/10.1093/abm/kax026
Sherman, K. J., Cherkin, D. C., Erro, J., Miglioretti, D. L., & Deyo, R. A. (2005). Comparing yoga, exercise, and a self-care book for chronic low back pain: A randomized, controlled trial. Annals of Internal Medicine, 143(12), 849–856. https://doi.org/10.7326/0003-4819-143-12-200512200-00003
Vijan, S. (2020). Evaluating heterogeneity of treatment effects. Biostatistics & Epidemiology, 4(1), 98–104. https://doi.org/10.1080/24709360.2020.1724003
Willan, A. R., & Pater, J. L. (1986). Carryover and the two-period crossover clinical trial. Biometrics, 42(3), 593–599. https://pubmed.ncbi.nlm.nih.gov/3567292/
©2022 Thevaa Chandereng. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.