The mainstay of evidence development in medicine is the parallel-group randomized controlled trial (RCT), which generates estimates of treatment efficacy or effectiveness for the average person in the trial. In contrast, personalized trials (sometimes referred to as ‘single-person trials’ or ‘N-of-1 trials’) assess the comparative effectiveness of two or more treatments in a single individual. These single-subject, randomized crossover trials have been used in a scattershot fashion in medicine for over 40 years but have not been widely adopted. An important barrier is the paucity of strong evidence that personalized trials improve outcomes. However, the principal impediment may have less to do with proof of efficacy than with practical aspects of design and implementation. These include decisions about treatment regimen flexibility, blinding, and washout periods as well as organizational, clinician, and patient-level challenges. After reviewing the essential elements of personalized trials, this article addresses these speed bumps and fundamentally asks, 'Why have personalized trials not been more widely adopted, and how can they be made more readily deployable and useful?’ The article concludes by suggesting ways in which emerging technologies and approaches promise to overcome existing barriers and open promising vistas for the next generation of personalized-trial researchers and practitioners.
Keywords: personalized trial, randomized controlled trial (RCT), heterogeneity of treatment effects (HTE), individualized treatment effect (ITE), blinding, washout
Parallel-group randomized controlled trials (RCTs) have made an enormous contribution to health and health care. They randomly assign patients to two or more treatment arms; the comparisons are between groups. When properly designed and conducted, these trials provide unbiased estimates of the ‘average treatment effect’ for participants in the trial. Evidence generated via this approach is surely better than the ‘eminence-based medicine’ of prior eras. However, the typical patient in a trial is often surprisingly different from average, especially with respect to prognosis (Kent et al., 2010). For this reason, clinical researchers and statisticians have avidly sought methods for estimating the effects of different treatments on individuals—that is, to account for heterogeneity of treatment effects (Kravitz et al., 2004). One approach, ideal in some circumstances and entirely unworkable in others, is the personalized (N-of-1) trial.
Personalized trials are randomized crossover trials conducted in a single patient. Such trials are a subset of single-case designs, which “study intensively the process of change by taking many measures on the same individual subject over a period of time” (Follette, 2001, p. 14110). Single-case designs have played an important role inside and outside of medicine for many generations (Mirza et al., 2017). However, randomized crossover trials in an individual (i.e., personalized trials) have a shorter history (Guyatt et al., 1986, 1988; Larson et al., 1993).
Personalized trials chiefly aim to guide treatment for the individual. Their singular advantage is the ability to directly estimate the individual treatment effect (ITE): the difference (or ratio) of outcomes between one treatment and another in a given person. The ‘treatment’ and its comparator can be a drug, a dietary supplement, a short-acting procedure, a behavior, a placebo, or no treatment at all. By switching treatments in a defined sequence over time, an individual can compare outcomes while on alternative regimens, thus providing a direct estimate of how well a given treatment works for her.
To date, adoption of personalized trials has been modest. Gabler et al. (2011) found 108 personalized trials (or trial series) published between 1985 and 2011, while Punja and colleagues (2016) independently found 100 reports appearing between 1950 and 2013. In contrast, there have been nearly 400,000 RCTs (mostly of the parallel group type) reported in PubMed since 2000.
The relatively slow rate of uptake has tended to disappoint personalized trial proponents (Vohra, 2016). One explanation is that such trials have simply not delivered on their promise of improving clinical outcomes; they represent “another instance of a beautiful idea being vanquished by cruel and ugly evidence” (Mirza & Guyatt, 2018, p. 1379). However, others argue that the concept has not been sufficiently tested (Kravitz et al., 2019). In addition, qualitative research with patients and clinicians suggests that many have never heard of the approach, have little sense of how to implement such trials in the context of a busy practice, or are skeptical as to whether the putative benefits (e.g., enhanced patient engagement in care, potentially improved clinical decision-making) are worth the costs and burdens (Cheung et al., 2020; Kravitz et al., 2020; Kronish et al., 2017; Moise et al., 2018). A quote by a physician-participant in an interview study is particularly apt: “Well, I personally would be interested in that, but I think one of the biggest limitations . . . is time and time constraints” (Kravitz et al., 2009, p. 441).
Personalized trials may yet find their place in the clinical and wellness landscape for two reasons. First, new developments in biostatistics, health informatics, and information technology are helping to streamline and automate many personalized trial functions. These innovations allow people to design their own trials and more readily collect, organize, analyze, and interpret personalized trial data. In particular, mobile devices combined with a robust backend may partially obviate the need for personalized trial ‘services’ established at the organizational, state, or national level (Chalmers et al., 2019).
Second, inspired by investigators in the behavioral sciences and by the quantified self movement (Swan, 2013), an increasing number of personalized trials are being conducted among people seeking to mitigate symptoms or enhance wellness with or without the guidance of a licensed health professional. (In this context, personalized trials represent a rigorous extension of self-tracking, itself a growing trend with implications for both self-care and health enhancement [Jin et al., 2022].) Along these lines, the personalized trial landscape can be conceptualized as a Cartesian plane with the two axes representing the underlying purpose (treatment versus health enhancement) and the need for clinical supervision (performed with professional guidance versus independently; Figure 1).
This article covers the essential elements of personalized trials, explores barriers to uptake and use, and discusses emerging technologies and approaches that may facilitate expanded use.
Regardless of where they fall within the landscape depicted in Figure 1, personalized trials have universal requirements. Some of these requirements are technical and guide the selection of subjects, health conditions, and interventions; the means by which trials are conducted; and the way data are analyzed and aggregated. Others are social and organizational; they bear on how participants are recruited, enrolled, and supported. Even when trials are conducted by individual patients/consumers acting independently of the health care system (bottom half of Figure 1), there is still a need for both technical support (e.g., in the form of ‘apps’) and social support (often taking the form of online discussion groups).
The technical requirements for personalized trials include criteria related to the population, health condition, treatment, and design and analysis of the trials themselves (Table 1). Two additional elements (i.e., blinding and washout) may also be necessary depending on the specific circumstances.
Technical Requirements Related to Population, Health Condition, and Treatment
Heterogeneity of treatment effects
Condition that is chronic, stable, and monitorable
Rapid onset/offset of treatments
Technical Requirements for Appropriate Design, Analysis, and Communication of Results
Randomized or balanced assignment
Systematic outcome assessment
Framework for analysis and feedback
Optional Elements That May Be Required in Selected Circumstances
Substantial heterogeneity of treatment effects (HTE). In qualitative terms, if HTE is small, then most patients respond to treatment in the same way, so one may then assume that average effects derived from parallel-group trials accurately signal what a given individual might expect. This would make personalization unnecessary. More quantitatively, HTE can be defined as the standard deviation of the individual treatment effects, which is a function of the pooled standard deviation of the outcome, SD, and the correlation, ρ, between the outcomes of individuals receiving each of two treatments: SD_HTE = SD × √(2(1 − ρ)). As ρ approaches 1, individuals respond alike and HTE vanishes; smaller (or negative) values of ρ imply substantial HTE.
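Assuming the definition above (outcomes under two treatments with a common standard deviation and between-treatment correlation ρ, so that SD_HTE = SD × √(2(1 − ρ))), the relationship can be sketched numerically:

```python
import math

def hte_sd(sd: float, rho: float) -> float:
    """SD of individual treatment effects when outcomes under the two
    treatments share a common SD and have correlation rho."""
    return sd * math.sqrt(2.0 * (1.0 - rho))

# As rho approaches 1, patients respond alike and personalization adds
# little; as rho falls, individual effects spread out and HTE grows.
print(hte_sd(10.0, 0.95))  # small HTE
print(hte_sd(10.0, 0.0))
print(hte_sd(10.0, -1.0))  # maximal HTE
```

Under this formulation, a personalized trial is most informative precisely when ρ is low, that is, when knowing the average effect tells us little about any one person.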
A health condition that is chronic, relatively stable, and monitorable with a validated patient-reported outcome measure (PROM) or biomarker. Acute conditions will tend to resolve (or progress) before personalized trials can be completed. Rapidly progressive or fatal conditions are likewise unsuitable. Acute severe coronavirus disease 2019 (COVID-19), for example, is a poor platform for such trials because the disease may kill the patient before multiple treatment crossovers can be accomplished. In contrast, chronic symptoms following COVID-19 infection (‘long COVID’) are an ideal target for personalized trials, as the typically prolonged course permits many treatment switches. Finally, the statistical reliability of single-patient, multiple-crossover trials increases the more often outcomes are assessed. This is why many personalized trials take measures daily or more often. The most common method for obtaining serial outcome measurements is through direct patient or proxy reports (i.e., surveys). Sometimes, a clinical measurement (e.g., blood pressure) or laboratory value (e.g., blood glucose) can be used as a proxy outcome. Increasingly, personalized trials have begun to incorporate outcomes data obtained during daily life via mobile devices (e.g., daily steps, sleep, social interactions).
Treatments that have rapid onset and modest carry-over effects. Because, in the sense we use the term, personalized trials require at least two treatment switches (e.g., BAAB or ABBA) and multiple outcome measurements, such studies can stretch on for some time and potentially try the patience of participants. The ideal study treatment will take effect quickly and dissipate rapidly. An excellent clinical example might be use of inhaled levodopa (Inbrija®) versus oral, immediate-release levodopa-carbidopa (Sinemet®) for ‘off-periods’ in Parkinson’s disease. Both of these agents take effect within minutes and wear off after a few hours. In contrast, given their extended biological half-life, bisphosphonates for osteoporosis would be terrible candidates for personalized trials. Some personalized trial investigators have dealt with the problem of prolonged treatment effects by incorporating washout periods (i.e., sufficient time in between treatment switches for the initial treatment to wear off) or various analytic techniques that adjust for carryover (e.g., by downweighting outcome measurements obtained soon after a switch). We discuss the use of washouts in Section 2.1.3.
Randomized or balanced treatment assignment. In most conventional clinical trials, the unit of analysis is the individual participant. In personalized trials, the unit of analysis is a segment of time (i.e., hour, day, week, etc.). Put differently, in RCTs, people are randomized to treatments, but in personalized trials, treatments are randomized within people. Treatments must be allocated in a manner that minimizes bias, maximizes statistical information, and conveys credibility to participants and clinicians. This is usually achieved with an appropriate restricted randomization scheme. Unrestricted random assignment might result in sequences with poor credibility, validity, and efficiency such as AAAABBAA, which frontloads Treatment A and allocates 75% of the entire study period to this treatment. Therefore, many experts restrict random assignment so as to limit randomization to a subset of possible sequences with desirable statistical properties while conveying reasonable credibility to the end users. For example, a trial comparing two treatments with weekly switches lasting a total of four weeks could restrict the randomization to the following four allowable sequences: ABAB, ABBA, BAAB, and BABA. These sequences allocate half of the treatment segments to each treatment within each block of two consecutive time segments. The randomization can be restricted further to the following two allowable sequences: ABBA and BAAB. These sequences are more robust against the possibility of confounding with time trend than the sequences ABAB and BABA.
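One way to implement such a restriction, sketched here for illustration (not drawn from any published protocol), is to enumerate the sequences that are counterbalanced within blocks of two consecutive segments and then, if desired, narrow the set further before randomizing:

```python
import itertools
import random

def balanced_sequences(n_blocks: int) -> list[str]:
    """All sequences of n_blocks consecutive pairs in which each pair
    contains exactly one 'A' and one 'B' (counterbalanced blocks)."""
    pairs = ["AB", "BA"]
    return ["".join(p) for p in itertools.product(pairs, repeat=n_blocks)]

seqs = balanced_sequences(2)
print(seqs)  # ['ABAB', 'ABBA', 'BAAB', 'BABA']

# Restrict further to the two sequences most robust to linear time trends,
# then randomize the individual's assignment among them.
robust = [s for s in seqs if s in ("ABBA", "BAAB")]
assignment = random.choice(robust)
```

A four-week trial would then follow the selected sequence week by week, with each week contributing a segment of outcome data to the within-person comparison.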
Systematic assessment and collection of outcomes. In personalized trials, systematic assessment of outcomes may well be the single most important design element. Two issues need consideration: (1) what data to collect and (2) how to collect them (Kravitz, 2014). For most chronic conditions, many outcomes are potentially relevant; they may be condition-specific (e.g., pain intensity in chronic low back pain, diarrhea frequency in inflammatory bowel disease) or generic (e.g., health-related quality of life). The ideal measure is reliable, valid, and—especially for trials where the primary aim is to inform clinical care of the current patient rather than produce generalizable evidence or influence regulatory decisions—closely matched to the patient’s priorities (when the focus is on generalizable evidence or regulatory approval, use of reproducible measures is essential). When such measures are unavailable off the shelf, patients and clinicians must design their own or enlist a hybrid approach, such as the Measure Yourself Medical Outcome Profile (MYMOP; Ishaque et al., 2019). Personalized trials can make use of the entire spectrum of data-collection modalities from surveys, diaries, medical records, and administrative data to newer technologies involving mobile devices and remote monitors.
A framework for statistical analysis and feedback for decision making. Once data are collected, the results need to be analyzed and presented to the relevant decision makers in an actionable form. Developers and users of personalized trials have three issues to consider: (1) should outcomes be combined, and how? (2) how should the data be presented? and (3) to what extent should various forms of prior knowledge be integrated into decision making? Separate measures retain clinical granularity, while composite measures distill complex information into fewer numbers or even a single number. Simple graphs are appealing to many patients but tend to ignore or downplay uncertainty. More complex graphs and tables might have allure for more sophisticated users, but could be hard for others to decipher and interpret. Some evidence suggests that combining simple graphs and verbal summary statements may have the widest reach (Whitney et al., 2018). Whether to customize the presentation, and how, is an important task for personalized trials, as well as for the broader framework of personalized data science (see companion article in this issue, Duan et al.). Finally, within a Bayesian framework, results of personalized trials are more robust when bolstered by external evidence, whether from other similar personalized trials or the clinical literature.
Blinding. This term generally refers to “keeping study participants, those involved with their management, and those collecting and analyzing clinical data unaware of the assigned treatment, so that they should not be influenced by that knowledge” (Day & Altman, 2000). Blinding of participants and clinicians in personalized trials can be challenging and is often unnecessary. Blinding is essential when there is a need to separate the biological activity of the treatment from nonspecific (placebo) effects. This is certainly the case in most parallel group drug and device trials as well as personalized trials conducted in series for the purpose of obtaining regulatory approval of a new therapeutic agent. However, in many personalized trials, participants are most interested in the overall effects of treatment, defined as the sum of specific and nonspecific effects. Therefore, blinding may be less important (and even counterproductive) in this context.
Washout. As noted above, a condition-treatment pair is ideally suited for the multiple-crossover approach of personalized trials when the condition is relatively stable (i.e., neither wildly fluctuating nor unrelentingly improving or deteriorating) and the treatment has a rapid onset and offset. However, many such condition-treatment pairs are suboptimal. When researchers are concerned that the effects of the treatment administered first may bleed over into the next observation period, their solution is often to introduce a washout period. Washouts may be ‘physical’ or ‘analytical’ (Hogben & Sim, 1953). In a physical washout, a period of time is permitted to elapse between treatments, and the interval depends on expected treatment duration. For pharmaceutical interventions, the washout interval would be an appropriate multiple of the elimination half-life. In addition to prolonging trial length, physical washouts introduce ethical problems, as patients are necessarily denied access to potentially effective treatment for the duration of the washout. In an analytical washout, treatments are administered sequentially without a break, but measurements are adjusted up or down (‘reweighted’) to account for what is known about the carryover and start-up effects, thus producing the equivalent of physical washout without unduly withholding treatments from patients. Analytic washouts cannot compensate for observation periods that are too short relative to the duration of action of the treatment.
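As a sketch of the analytical-washout idea, one might downweight observations taken shortly after a switch according to an assumed carryover half-life; both the weighting form and the half-life below are illustrative assumptions, not a published method:

```python
def carryover_weights(days_since_switch: list[int],
                      half_life: float) -> list[float]:
    """Analytic-washout sketch: downweight observations taken soon after
    a treatment switch, assuming the prior treatment's influence decays
    exponentially with the given half-life (in days)."""
    return [1.0 - 0.5 ** (d / half_life) for d in days_since_switch]

# With a one-day half-life: the day of the switch gets weight 0
# (pure carryover), and weights approach 1 as washout completes.
w = carryover_weights([0, 1, 2, 4, 7], half_life=1.0)
```

Note the caveat in the text: no reweighting scheme can rescue observation periods that are too short relative to the treatment's duration of action, since then every measurement is contaminated by carryover.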
In addition to these technical requirements, clinicians and clinical investigators interested in launching personalized trials need social and organizational support. Within health care settings, clinicians hoping to make personalized trials available to their patients must begin with a keen understanding of the indications, strengths, and limitations of the method; these trials are not for everyone. They should also be adequately committed to the process so as to not only convey their enthusiasm to patients but also weather the inevitable setbacks, delays, and ambiguities. Beyond their own personal commitment, clinician-investigators need support from organizational leaders and colleagues. While personalized trials may lower costs in the long run (Kravitz et al., 2008; Pereira et al., 2005; Scuffham et al., 2010), they can impose significant time demands and require ongoing investment in personnel and infrastructure. For example, Scuffham et al. (2008) estimated the fixed cost of personalized drug trials at AU$23,280 for each protocol; this included staff costs for protocol development, funding applications, ethics agreements, preparation of forms and questionnaires, database development, and design and preparation of medication packs. (A single protocol could serve as the framework for personalized trials conducted in multiple individuals.) The variable (i.e., per-patient) costs were estimated at roughly AU$600, which included recruitment, administration, data collection and analysis, feedback, and 12-month follow-up of outcomes. Given the relatively low marginal costs of enrolling each additional patient, successful personalized trial programs create economies of scale. Organizational leadership must step up to not only provide the initial investment but also support clinician champions in bringing along colleagues who recruit additional patients.
Outside of health care settings, personalized trials need participants, and participants need a platform that makes participation easy. Investigators, meanwhile, must identify a target population, develop a marketing strategy, and encourage enrollment through social networks. As an example, in a recently published personalized trial series marketed to the general public, a multidisciplinary team used social media and an interview on the Brian Lehrer Show (on NPR.org) to recruit participants interested in trying out one of several simple behavioral interventions for promoting psychological well-being (Kravitz et al., 2020). They created a website with training videos, provided participants with a mobile app for reporting daily outcomes during intervention and control periods, and returned results via a personal web link.
Whether studies are conducted within or outside of health care settings, research suggests that many patients are natural enthusiasts for self-tracking but do not necessarily appreciate the benefits of randomized (or balanced) switching between treatments, and they are not always prepared to interpret even simple numerical or graphical results (Whitney et al., 2018). Therefore, in designing personalized trials, investigators need to account for patient preferences (Moise et al., 2018) and information-processing styles (Gigerenzer et al., 2007).
Although personalized trials have many adherents and a few evangelists, implementation has been slow. As suggested earlier, a major reason is conflicting evidence; few RCTs have directly compared personalized trials to standard care, and most of those have produced marginally positive, equivocal, or unconvincing results. For example, in a study by Mahon et al. (1996), personalized trials succeeded in convincing a large proportion of asthma patients taking theophylline without benefit to discontinue the medication. In a study of arthritis patients by Pope et al. (2004), N-of-1 patients had slightly better outcomes at substantially higher cost. Further, in an RCT by Kravitz et al. (2018), chronic pain patients assigned to the N-of-1 group had slightly better pain interference and significantly enhanced medication-related shared decision-making. However, the between-group difference in the primary outcome of pain interference did not reach statistical significance.
On the other hand, a substantial number of published case series support the feasibility and value of personalized trials for individual patients, and some argue that RCTs are an inappropriate testing ground for such trials because of the very nature of personalization (Vohra, 2016). As it stands, much of the ‘evidence’ supporting personalized trials derives from case series in which trial participation has been associated with (1) choosing a more personally effective treatment for long-term use, (2) choosing a safer or less-costly treatment for long-term use, or (3) continuing with an evidence-based treatment that allegedly causes side effects (Joy et al., 2014; Nikles et al., 2006, 2007; Yelland et al., 2007). The next generation of personalized trial researchers will further unravel the inherent heterogeneity of treatment effects (where personalized trials are the treatment) and identify which patients benefit and which do not.
Setting aside the question of ‘effectiveness’ of personalized trials for the average patient, what are the remaining barriers to their uptake? These may be broadly categorized as intrinsic or extrinsic. Intrinsic factors include elements of trial design that may or may not be essential but increase burden to investigator, clinician, or participant. Extrinsic factors include perceptions of benefit and cost as assessed by organizations, clinicians, and patients/participants.
The principal intrinsic (design) factor to consider is treatment regimen rigidity versus flexibility. In parallel group RCTs, treatment protocols are well-defined with limited opportunities for adjustment. For example, in cancer trials, a fixed-dose (i.e., mg/kg), two-drug chemotherapy regimen may be compared to a three-drug regimen with allowances for a 50% dose reduction in the event of certain side effects. Similarly, investigators interested in aggregating the results of N-of-1 series through meta-analysis will naturally prefer relatively rigid treatment regimens to simplify inferences about overall (average) treatment effects, HTE, and predictors of individual treatment effects. Although some regimen-related variation is manageable using network meta-analysis, the quest for generalizability will tend to favor uniform treatment regimens.
In contrast, the primary goal of personalized trials is to guide treatment for the individual. Since people differ in terms of their physiology, psychology, social determinants, and preferences, treatment regimens for comparison often need tailoring. For example, in the PREEMPT (Personalized Research for Monitoring Pain Treatment) Study, patients with chronic musculoskeletal pain were encouraged to make comparisons among any combination of eight treatment categories including acetaminophen, nonsteroidal anti-inflammatory drugs, short-acting opioids, and various non-pharmacological complementary and alternative treatments.
Other design factors (introduced earlier in this article as ‘optional elements’) are blinding and washout. Blinding may be essential when it is critical to exclude non–drug-related (nonspecific) benefits (i.e., placebo effect) or to investigate adverse effects (i.e., nocebo effect; Herrett et al., 2021). At least one rating scale has incorporated blinding as a criterion for personalized trial quality (Tate et al., 2013). However, blinding may be impossible in some settings (e.g., with most behavioral interventions) and unnecessary in others (e.g., when the patient or clinician is interested in the sum of specific and nonspecific effects). Indeed, absent the need to generalize to other people or populations, blinding may be undesirable because it would preclude accounting for the sum of specific and nonspecific treatment effects in an individual. In addition, blinding increases costs and decreases regimen flexibility.
Washout periods—introduced to guard against carryover effects—have their own limitations (Duan et al., 2013). Patients and clinicians may be dissatisfied with the withholding of active treatments during the washout. If the treatment has a slow onset, a washout period can increase the delay before clinical effects are realized and thereby stretch out the duration of the trial.
Personalized trials targeted to clinical populations (see Figure 1) require the support of organizations (e.g., governments, health systems, hospitals, clinics, or practices). With robust support from organizational leadership, investigators can recruit clinicians and patients; hire pharmacists, statisticians, and database managers; overcome inevitable administrative hurdles; and amass the necessary resources to implement trials efficiently and effectively. If the organization is resistant, personalized trial implementation is much more difficult.
A major barrier to the incorporation of personalized trials into routine practice is the ongoing debate over whether these trials represent ‘research’ or merely a more rigorous approach to routine clinical care. If personalized trials are research, then the usual requirements (i.e., institutional review board [IRB] approval, written informed consent, and third-party monitoring) all apply. If they are simply an upgrade to clinical care, then authority devolves to the clinician and patient through a process of shared decision-making and oversight through applicable licensing and credentialing bodies. As Punja and colleagues (2014, p. 15) have argued:
If the primary interest is to produce local knowledge to inform treatment decisions for individual patients, n-of-1 trials so conducted should be interpreted as clinical care, and in our view are not subject to the HHS protection of human subjects regulations. Alternatively, if the primary interest is to produce generalizable knowledge to inform treatment decisions for future patients, such n-of-1 trials should be interpreted as human subjects research and required to comply with the standards of such research.
Two sources of difficulty are worth noting. First, organizational leaders and ethics boards may not accept the premise that personalized trials are not research. Second, distinguishing between intent to produce local (i.e., individual patient) knowledge and generalizable (i.e., more broadly applicable) knowledge can be challenging. A widely held but erroneous belief is that intent to publish constitutes research (Office for Human Research Protections, 2021). However, gray areas abound. For example, what if the data from a patient undergoing a personalized trial to inform their own care will be aggregated with data from other patients undergoing similar trials for the purpose of assessing both average treatment effects and HTE? What if the individual’s data are used to inform the treatment of the next patient? What if some or all of the treatment options within a personalized trial are in common use for that indication but are not FDA-approved (i.e., they are used ‘off-label’)? Punja et al. (2014) provide a reasonable framework for sorting out some of these difficulties, but they have yet to be resolved.
These ethical and regulatory conundrums aside, organizational leaders simply may not see the value proposition in personalized trials. Given fixed and variable costs totaling US$1,000 or more per case, Pace et al. (2014) conclude that personalized trials will gain economic traction only when applied to clinical areas where treatment costs are high, serious side effects are prevalent, and infrastructure is adequate to support trials that are straightforward, efficient, and timely.
At the clinician level, the major barriers are both practical and relational (Kravitz et al., 2009). Practically speaking, many clinicians will conclude that personalized trials, at least for most patients, are simply not worth the time and effort. This conclusion is based partly on the judgment that therapeutic trials as used in customary practice are often ‘good enough’ and partly on the impression that many patients already struggle with adherence and self-monitoring and will ‘not do well’ with the extra demands entailed by multiple crossover trials. Of course, some of these concerns might be obviated by a more flexible approach to personalized trial design. From the relational perspective, some clinicians are fearful that the concept of trials within clinical settings will upend the nature of professionalism and the doctor–patient relationship:
It seems like it takes away the doctor's doctoring so that the doctor becomes this scientist. You come to see your doctor because you want their opinion, and [instead] the doctor’s response is: “Well, I don't really know. Let’s try these two things. I don't know which one you’re going to get [first] but let’s give it a go.” So I don't know how patients would respond to that. (Kravitz et al., 2009, p. 440)
These organization- and clinician-level barriers are less relevant in nonclinical settings, where personalized trials are offered to members of the public who are seeking to manage minor symptoms or enhance well-being on their own. These trials must appeal to potential participants but can, as long as certain ethical and legal shoals are avoided, safely bypass clinicians and institutions. The prototypical example is the quantified self (QS), which is a loosely organized affiliation of self-trackers and toolmakers who share an interest in “self-knowledge through numbers” (Quantified Self, 2019). Recent health-related projects featured on the QS website include tracking blood oxygen on Mount Everest, mindfulness following meditation, home-monitored blood glucose and lipids in response to diet and exercise, and allergic symptoms in response to different grasses. QS encourages interested parties to carry out their own projects or to join ongoing ones. They attract individuals curious about this form of self-tracking largely through word-of-mouth. For more formal projects in which the goal is to enroll a group of individuals slated to participate in similarly structured personalized trials, more robust recruitment methods are needed (Kravitz et al., 2020).
The main message of this review is that increased uptake of personalized trials will require strategies that maximize real and perceived benefits and minimize costs and burdens to participants. Some of these strategies are listed in Table 2.
Table 2. Strategies for increasing the uptake of personalized trials.

Strategies for Enhancing Benefits
- Choose better comparators and maximize adherence to treatment
- Use adaptive designs such as ‘play-the-winner’
- Sharpen precision (with more, better, or more frequently obtained outcome measures)
- Report results more quickly (using automated statistical analysis and reporting)
- Report results more clearly to enhance comprehension
- Make iteration (i.e., repeated trials in the same person, building on results of earlier trials) seamless

Strategies for Reducing Costs and Burdens
- Reach consensus as to when personalized trials are ‘research’ and when they are clinical care (or quality improvement) and reduce unneeded institutional review board requirements
- Streamline enrollment and consent procedures
- Automate trial implementation (e.g., delivery of behavioral treatments or ‘nudges’)
- Automate data collection (using mobile apps and sensors)
- Consider economic ‘nudges’ or incentives
There are six ways to enhance benefits (see Table 2). First, whether treatments are pharmacologic or non-pharmacologic, they should be as well defined as possible, have rapid onset and offset, and be convenient to administer. Tracking adherence (using the least-burdensome methods possible) is also important in helping to bifurcate the analysis into ‘intention to treat’ and ‘as treated.’ (The ‘as treated’ analysis may be of interest to some patients—and many investigators—as an indicator of expected treatment effects when adherence is at or above a preselected target.)
Second, it may be valuable to extract information as early in the trial as possible so that the most promising treatment(s) can be assigned more frequently. For example, in a trial designed to run through four pairs of crossovers (e.g., ABBAABBA), if the first two pairs show a consistent, large advantage for treatment A, the probability of assignment to AA (rather than AB or BA) in subsequent pairs could be adjusted upward. Such an adaptive design may reduce the chance of being stuck on an ‘inferior’ treatment, improving patient outcomes even during the trial and making participation more attractive to patients.
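The adaptive rule described above can be made concrete. The following is a minimal sketch, not any published design: the function name, the sign convention (negative differences favor A when lower scores are better), and the boost probability are all our assumptions for illustration.

```python
import random

def next_pair(pair_diffs, boost_p=0.7):
    """Choose the treatment ordering for the next crossover pair in a
    play-the-winner-style adaptive design (illustrative sketch).

    pair_diffs holds the observed per-pair outcome differences
    (A minus B; negative favors A when lower scores are better).
    If the first two completed pairs both favor A, assign the AA
    block with probability boost_p; otherwise fall back to balanced
    randomization over the counterbalanced orders AB and BA.
    """
    if len(pair_diffs) >= 2 and all(d < 0 for d in pair_diffs[:2]):
        if random.random() < boost_p:
            return "AA"
    return random.choice(["AB", "BA"])
```

A symmetric rule (boosting BB when B leads) would be needed in practice; this one-sided version is kept short to show the core idea of shifting assignment probabilities mid-trial.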
Third, it will be desirable to enhance the precision of results by extending trial length, taking more measurements (daily, hourly, or even continuously), and/or adopting measurement instruments (e.g., psychometric scales, bodily sensors) with high reliability and validity. Psychometric scales always feature a tradeoff between reliability and convenience (longer scales are, all else equal, more reliable), but computerized adaptive testing built into mobile devices holds promise for achieving greater measurement stability with fewer items (Morris et al., 2017).
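The precision gain from taking more measurements follows the familiar 1/√n rule, assuming the measurements are approximately independent; a minimal illustration (the function name is ours):

```python
from math import sqrt

def se_of_period_mean(within_person_sd, n_measurements):
    """Standard error of a treatment-period mean: averaging n
    (approximately independent) measurements shrinks the error by a
    factor of 1/sqrt(n), which is why more frequent measurement
    sharpens the comparison between treatments."""
    return within_person_sd / sqrt(n_measurements)
```

Quadrupling the number of measurements halves the standard error, although autocorrelated or reactive measurements deliver less than this idealized gain.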
Fourth, results should be reported promptly, as enthusiasm for integrating the results may decay rapidly after trial completion. Rapid turnaround can be accomplished either through shared human resources (e.g., a statistical team that is available most days of the week to perform analyses and return results with minimal delay) or through automated analysis using fixed algorithms or machine learning.
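A fixed-algorithm pipeline of the kind mentioned above can be quite simple. The sketch below is hypothetical (it uses a normal approximation, not any particular published method) but shows how results could be computed and returned the moment the last observation arrives:

```python
from math import sqrt
from statistics import mean, stdev

def summarize_trial(diffs, z=1.96):
    """Summarize within-pair outcome differences (treatment A minus B)
    from a completed personalized trial.

    Returns the mean difference and an approximate 95% confidence
    interval (normal approximation); an automated pipeline could push
    this summary to the participant immediately after the final
    measurement is recorded.
    """
    m = mean(diffs)
    se = stdev(diffs) / sqrt(len(diffs))
    return {"mean_diff": round(m, 2),
            "ci": (round(m - z * se, 2), round(m + z * se, 2))}
```

In practice, crossover data often require adjustment for period effects and autocorrelation, so a production pipeline would be more elaborate than this.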
Fifth, no one should assume that results will be quickly, accurately, and meaningfully interpreted by trial participants; statistical illiteracy is a widespread problem (Gigerenzer et al., 2007). Beyond that, in one qualitative study, the majority of personalized trial participants preferred simple displays (such as bar charts) that do not represent uncertainty, while a substantial minority (23%) preferred more comprehensive displays with error bands for the margin of error as the “most helpful for decision-making” (Whitney et al., 2018). Therefore, we believe it is important for personalized trials to provide flexible options for delivering the results of individual trials, allowing end users to choose their preferred format, whether it be simple or comprehensive; graphical, tabular, or textual; and with or without representation of uncertainty. A one-size-fits-all approach, providing everyone with the same comprehensive display, is unlikely to satisfy many. But providing everyone with simple bar charts might be equally dismaying to the roughly one quarter of end users who are comfortable with representations of probability and uncertainty.
Lack of flexibility (personalization) in results reporting could be one reason that the influence of personalized trial results on patients’ subsequent treatment preferences is relatively weak (Kravitz et al., 2021). Furthermore, in the context of personalized trials, the clinical significance of small differences in means has not been established for most quality-of-life measures (Jaeschke et al., 1991). Although standards for visualization and statistical analysis have been proposed (Kratochwill et al., 2010; Kravitz, 2014), much more work is needed to identify best practices for communicating results to users (see the companion article in this issue, Duan et al., 2022, for further discussion, both for personalized trials and for the broader framework of personalized data science).
Finally, investigators must own up to the fact that, for many patients, personalized trials offer a pro tem solution. New treatments come on the market, clinical conditions evolve, comorbidities develop, and patient preferences change. New questions arise about combinations of treatments heretofore evaluated singly. Therefore, to maximize the benefits delivered, trial platforms should be flexible enough to allow for ongoing (iterative) comparisons, perhaps using information already acquired as Bayesian priors.
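The iterative use of earlier results as Bayesian priors can be sketched with a conjugate normal–normal update. This is a deliberate simplification (real analyses would model period effects and autocorrelation), and the function name and inputs are our assumptions:

```python
def bayes_update(prior_mean, prior_var, data_mean, data_var):
    """Combine a prior on the individual treatment effect (e.g., from
    an earlier trial in the same person) with new trial data, assuming
    a normal prior and normal likelihood (conjugate update).

    Returns the posterior mean and variance; each completed trial's
    posterior can serve as the prior for the next iteration.
    """
    w = prior_var / (prior_var + data_var)   # weight on the new data
    post_mean = (1 - w) * prior_mean + w * data_mean
    post_var = 1 / (1 / prior_var + 1 / data_var)
    return post_mean, post_var
```

With equal prior and data variances, the posterior mean is simply the average of the two estimates, and the posterior variance is halved, which is the sense in which each new trial builds on the last.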
As for reducing the costs and burdens of personalized trials, we have identified five targets (see Table 2). Any trial initiated by clinicians or investigators at an academic institution or health system will require oversight by an institutional review board (IRB). If oversight is conducted with a light touch (particularly for trials that fall into the category of clinical care or quality improvement), implementation can proceed apace; if it is heavy-handed, long delays and high burden can be expected. Several groups have proposed classification schemes or algorithms for identifying trials requiring greater scrutiny (Punja et al., 2014; Stunnenberg et al., 2020).
Another way to reduce costs and barriers is to make it easier for individuals to find relevant trials (or to design their own), complete enrollment procedures, and provide meaningful informed consent. A number of electronic platforms, many created for use on mobile devices, have been used or are in development (Barr et al., 2015; Bobe et al., 2020; Konigorski et al., 2020; Kravitz et al., 2020). As Davidson et al. (2018) point out, however,
For clinicians interested in embedding N-of-1 trials in their clinical practice, a personalized trial platform needs to be developed that allows users to customize trial designs according to the use case. A shared service that delivers custom-built trial prototypes, uses a dedicated pharmacy, and facilitates data collection and analyses might best reduce logistical and cost barriers to widespread implementation. Over time, such infrastructure can foster the development of successful supporting services and mobile health applications that both facilitate N-of-1 trials and reduce technical barriers and implementation costs.
A third target is to automate trial implementation by streamlining delivery of the intervention(s). For example, some groups have arranged to mail personalized trial participants their study drugs (Nikles et al., 2006; Pace et al., 2014). Others have created electronic prompts and instructional videos to deliver behavioral interventions (Kravitz et al., 2020).
A fourth approach to minimizing burdens is to make data collection easier. While more measurements are generally more psychometrically reliable than fewer, patient tolerance for frequent measures has its limits (Lee et al., 2020). Therefore, investigators have viewed with interest the prospect of sampling outcomes using mobile devices or special sensors. These approaches will undoubtedly gain credence as new technologies evolve, but they will also raise important privacy concerns.
Finally, burdens can be offset, if not eliminated, by providing potential participants with economic incentives. The investment could be worthwhile, especially if personalized trials are used as a prelude to authorization of expensive medications (Kravitz et al., 2008).
Personalized trials remain a promising strategy for individualizing care under conditions of increased therapeutic precision. They have focused applicability within health and medicine and, though not for everyone, they have already demonstrated broad appeal within certain populations. However, fulfilling their potential will require new approaches to maximizing benefits and minimizing burdens. Advances in biostatistics and data science, information technology, and behavioral economics hold promise for delivering personalized trials more efficiently, thereby making this ‘non-omical’ form of individualized precision medicine available to more people (Kravitz, 2014).
This project was supported in part by the National Center for Advancing Translational Sciences, National Institutes of Health, through grant number UL1 TR001860 as well as grants R01LM012836 from the National Library of Medicine of the National Institutes of Health and P30AG063786 from the National Institute on Aging of the National Institutes of Health. The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication. The views expressed in this paper are those of the authors and do not represent the views of the National Institutes of Health, the U.S. Department of Health and Human Services, or any other government entity.
Barr, C., Marois, M., Sim, I., Schmid, C. H., Wilsey, B., Ward, D., Duan, N., Hays, R. D., Selsky, J., Servadio, J., Schwartz, M., Dsouza, C., Dhammi, N., Holt, Z., Baquero, V., MacDonald, S., Jerant, A., Sprinkle, R., & Kravitz, R. L. (2015). The PREEMPT study—Evaluating smartphone-assisted N-of-1 trials in patients with chronic pain: Study protocol for a randomized controlled trial. Trials, 16(1), Article 67. https://doi.org/10.1186/s13063-015-0590-8
Bobe, J. R., De Freitas, J. K., & Glicksberg, B. S. (2020). Exploring the potential for collaborative use of an app-based platform for N-of-1 trials among healthcare professionals that treat patients with insomnia. Frontiers in Psychiatry, 11, Article 530995. https://doi.org/10.3389/fpsyt.2020.530995
Chalmers, I., Smeeth, L., & Goldacre, B. (2019). Personalised medicine using N-of-1 Trials: Overcoming barriers to delivery. Healthcare, 7(4), Article 134. https://doi.org/10.3390/healthcare7040134
Cheung, Y. K., Wood, D., Zhang, K., Ridenour, T. A., Derby, L., St Onge, T., Duan, N., Duer-Hefele, J. D., Davidson, K. W., Kronish, I., & Moise, N. (2020). Personal preferences for personalised trials among patients with chronic diseases: An empirical Bayesian analysis of a conjoint survey. BMJ Open, 10(6), Article e036056. https://doi.org/10.1136/bmjopen-2019-036056
Davidson, K. W., Cheung, Y. K., McGinn, T., & Wang, Y. C. (2018, December 10). Expanding the role of N-of-1 trials in the precision medicine era: Action priorities and practical considerations. NAM Perspectives. https://nam.edu/expanding-the-role-of-n-of-1-trials-in-the-precision-medicine-era-action-priorities-and-practical-considerations/
Day, S. J., & Altman, D. G. (2000). Blinding in clinical trials and other studies. BMJ, 321(7259), Article 504. https://doi.org/10.1136/bmj.321.7259.504
Duan, N., Kravitz, R. L., & Schmid, C. H. (2013). Single-patient (N-of-1) trials: A pragmatic clinical decision methodology for patient-centered comparative effectiveness research. Journal of Clinical Epidemiology, 66(Suppl. 8), S21–S28. https://doi.org/10.1016/j.jclinepi.2013.04.006
Duan, N., Norman, D., Schmid, C. H., Sim, I., & Kravitz, R. L. (2022). Personalized data science and personalized (N-of-1) trials: Promising paradigms for individualized health care. Harvard Data Science Review, (Special Issue 3). https://doi.org/10.1162/99608f92.8439a336
Follette, W. C. (2001). Single-case experimental designs in clinical settings. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social & behavioral sciences (pp. 14110–14116). Pergamon.
Gabler, N. B., Duan, N., Vohra, S., & Kravitz, R. L. (2011). N-of-1 trials in the medical literature: A systematic review. Medical Care, 49(8), 761–768. https://doi.org/10.1097/MLR.0b013e318215d90d
Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L. M., & Woloshin, S. (2007). Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest, 8(2), 53–96. https://doi.org/10.1111/j.1539-6053.2008.00033.x
Guyatt, G., Sackett, D., Adachi, J., Roberts, R., Chong, J., Rosenbloom, D., & Keller, J. (1988). A clinician's guide for conducting randomized trials in individual patients. CMAJ, 139(6), 497–503.
Guyatt, G., Sackett, D., Taylor, D. W., Chong, J., Roberts, R., & Pugsley, S. (1986). Determining optimal therapy—Randomized trials in individual patients. New England Journal of Medicine, 314(14), 889–892. https://doi.org/10.1056/NEJM198604033141406
Herrett, E., Williamson, E., Brack, K., Beaumont, D., Perkins, A., Thayne, A., Shakur-Still, H., Roberts, I., Prowse, D., Jamal, Z., Goldacre, B., van Staa, T., MacDonald, T. M., Armitage, J., Moore, M., Hoffman, M., & Smeeth, L. (2021). Statin treatment and muscle symptoms: Series of randomised, placebo controlled N-of-1 trials. BMJ, 372, Article n135. https://doi.org/10.1136/bmj.n135
Hogben, L., & Sim, M. (1953). The self-controlled and self-recorded clinical trial for low-grade morbidity. British Journal of Preventive & Social Medicine, 7(4), 163–179. https://doi.org/10.1136/jech.7.4.163
Ishaque, S., Johnson, J. A., & Vohra, S. (2019). Individualized health-related quality of life instrument Measure Yourself Medical Outcome Profile (MYMOP) and its adaptations: A critical appraisal. Quality of Life Research, 28(4), 879–893. https://doi.org/10.1007/s11136-018-2046-6
Jaeschke, R., Guyatt, G. H., Keller, J., & Singer, J. (1991). Interpreting changes in quality-of-life score in N of 1 randomized trials. Controlled Clinical Trials, 12(4), S226–S233. https://doi.org/10.1016/s0197-2456(05)80026-1
Jin, D., Halvari, H., Maehle, N., & Olafsen, A. H. (2020). Self-tracking behaviour in physical activity: A systematic review of drivers and outcomes of fitness tracking. Behaviour & Information Technology, 41(2), 242–261. https://doi.org/10.1080/0144929X.2020.1801840
Joy, T. R., Monjed, A., Zou, G. Y., Hegele, R. A., McDonald, C. G., & Mahon, J. L. (2014). N-of-1 (single-patient) trials for statin-related myalgia. Annals of Internal Medicine, 160(5), 301–310. https://doi.org/10.7326/m13-1921
Kent, D. M., Rothwell, P. M., Ioannidis, J. P., Altman, D. G., & Hayward, R. A. (2010). Assessing and reporting heterogeneity in treatment effects in clinical trials: A proposal. Trials, 11(1), Article 85. https://doi.org/10.1186/1745-6215-11-85
Konigorski, S., Wernicke, S., Slosarek, T., Zenner, A. M., Strelow, N., Ruether, F. D., Henschel, F., Manaswini, M., Pottbäcker, F., Edelman, J. A., Owoyele, B., Danieletto, M., Golden, E., Zweig, M., Nadkarni, G., & Böttinger, E. (2020). StudyU: A platform for designing and conducting innovative digital N-of-1 trials. arXiv. https://doi.org/10.48550/arXiv.2012.14201
Kratochwill, T. R., Hitchcock, J., Horner, R., Levin, J. R., Odom, S., Rindskopf, D., & Shadish, W. (2010). Single-case designs technical documentation. What Works Clearinghouse.
Kravitz, R. L., Duan, N., Eslick, I., Gabler, N., Kaplan, H. C., Larson, E., Pace, W. D., Schmid, C. H., Sim, I., & Vohra, S. (2014). Design and implementation of N-of-1 trials: A user’s guide. Agency for Healthcare Research and Quality. https://effectivehealthcare.ahrq.gov/products/n-1-trials/research-2014-5
Kravitz, R. L. (2014). Personalized medicine without the “omics.” Journal of General Internal Medicine, 29(4), 551.
Kravitz, R. L., Aguilera, A., Chen, E. J., Choi, Y. K., Hekler, E., Karr, C., Kim, K. K., Phatak, S., Sarkar, S., Schueller, S. M., & Sim, I. (2020). Feasibility, acceptability, and influence of mHealth-supported N-of-1 trials for enhanced cognitive and emotional well-being in US volunteers. Frontiers in Public Health, 8, Article 260. https://doi.org/10.3389/fpubh.2020.00260
Kravitz, R. L., Duan, N., & Braslow, J. (2004). Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. Milbank Quarterly, 82(4), 661–687. https://doi.org/10.1111/j.0887-378X.2004.00327.x
Kravitz, R. L., Duan, N., & Braslow, J. (2006). Erratum: Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages (Milbank Quarterly, 82(4), 661–687). Milbank Quarterly, 84(4), 759–760. https://doi.org/10.1111/j.1468-0009.2006.00473.x
Kravitz, R. L., Duan, N., & White, R. H. (2008). N-of-1 trials of expensive biological therapies: A third way? Archives of Internal Medicine, 168(10), 1030–1033. https://doi.org/10.1001/archinte.168.10.1030
Kravitz, R. L., Paterniti, D. A., Hay, M. C., Subramanian, S., Dean, D. E., Weisner, T., Vohra, S., & Duan, N. (2009). Marketing therapeutic precision: Potential facilitators and barriers to adoption of N-of-1 trials. Contemporary Clinical Trials, 30(5), 436–445. https://doi.org/10.1016/j.cct.2009.04.001
Kravitz, R. L., Schmid, C. H., Marois, M., Wilsey, B., Ward, D., Hays, R. D., Duan, N., Wang, Y., MacDonald, S., Jerant, A., Servadio, J. L., Haddad, D., & Sim, I. (2018). Effect of mobile device-supported single-patient multi-crossover trials on treatment of chronic musculoskeletal pain: A randomized clinical trial. JAMA Internal Medicine, 178(10), 1368–1377. https://doi.org/10.1001/jamainternmed.2018.3981
Kravitz, R. L., Schmid, C. H., & Sim, I. (2019). Finding benefit in N-of-1 trials—Reply. JAMA Internal Medicine, 179(3), 455. https://doi.org/10.1001/jamainternmed.2018.8330
Kravitz, R. L., Marois, M., Sim, I., Ward, D., Kanekar, S. S., Yu, A., Dounias, P., Yang, J., Wang, Y., & Schmid, C. H. (2021). Chronic pain treatment preferences change following participation in N-of-1 trials, but not always in the expected direction. Journal of Clinical Epidemiology, 139, 167–176. https://doi.org/10.1016/j.jclinepi.2021.08.007
Kronish, I. M., Alcántara, C., Duer-Hefele, J., Onge, T. S., Davidson, K. W., Carter, E. J., Medina, V., Cohn, E., & Moise, N. (2017). Patients and primary care providers identify opportunities for personalized (N-of-1) trials in the mobile health era. Journal of Clinical Epidemiology, 89, 236–237. https://doi.org/10.1016/j.jclinepi.2017.06.008
Larson, E. B., Ellsworth, A. J., & Oas, J. (1993). Randomized clinical trials in single patients during a 2-year period. JAMA, 270(22), 2708–2712.
Lee, R. R., Shoop-Worrall, S., Rashid, A., Thomson, W., & Cordingley, L. (2020). “Asking too much?”: Randomized N-of-1 trial exploring patient preferences and measurement reactivity to frequent use of remote multidimensional pain assessments in children and young people with juvenile idiopathic arthritis. Journal of Medical Internet Research, 22(1), Article e14503. https://doi.org/10.2196/14503
Mahon, J., Laupacis, A., Donner, A., & Wood, T. (1996). Randomised study of n of 1 trials versus standard practice. BMJ, 312(7038), 1069–1074. https://doi.org/10.1136/bmj.312.7038.1069
Mirza, R. D., Punja, S., Vohra, S., & Guyatt, G. (2017). The history and development of N-of-1 trials. Journal of the Royal Society of Medicine, 110(8), 330–340. https://doi.org/10.1177/0141076817721131
Mirza, R. D., & Guyatt, G. H. (2018). A randomized clinical trial of N-of-1 trials—Tribulations of a trial. JAMA Internal Medicine, 178(10), 1378–1379. https://doi.org/10.1001/jamainternmed.2018.3979
Moise, N., Wood, D., Cheung, Y. K. K., Duan, N., Onge, T. S., Duer-Hefele, J., Pu, T., Davidson, K. W., & Kronish, I. M. (2018). Patient preferences for personalized (N-of-1) trials: A conjoint analysis. Journal of Clinical Epidemiology, 102, 12–22. https://doi.org/10.1016/j.jclinepi.2018.05.020
Morris, S., Bass, M., Lee, M., & Neapolitan, R. E. (2017). Advancing the efficiency and efficacy of patient reported outcomes with multivariate computer adaptive testing. Journal of the American Medical Informatics Association, 24(5), 897–902. https://doi.org/10.1093/jamia/ocx003
Nikles, C. J., Mitchell, G. K., Del Mar, C. B., Clavarino, A., & McNairn, N. (2006). An N-of-1 trial service in clinical practice: Testing the effectiveness of stimulants for attention-deficit/hyperactivity disorder. Pediatrics, 117(6), 2040–2046. https://doi.org/10.1542/peds.2005-1328
Nikles, C. J., Mitchell, G. K., Del Mar, C. B., McNairn, N., & Clavarino, A. (2007). Long-term changes in management following N-of-1 trials of stimulants in attention-deficit/hyperactivity disorder. European Journal of Clinical Pharmacology, 63(11), 985–989. https://doi.org/10.1007/s00228-007-0361-x
Office for Human Research Protections. (2021). Quality improvement activities frequently asked questions. https://www.hhs.gov/ohrp/regulations-and-policy/guidance/faq/quality-improvement-activities/index.html
Pace, W., Larson, E. B., Staton, E. W., & the DEcIDE Methods Center N-of-1 Guidance Panel. (2014). Financing and economics of conducting N-of-1 trials. In N. Duan & R. L. Kravitz and the DEcIDE Methods Center N-of-1 Guidance Panel (Eds.), Design and implementation of N-of-1 trials: A user’s guide (No. 13(14)-EHC122-EF, pp. 23–32). Agency for Healthcare Research and Quality. https://effectivehealthcare.ahrq.gov/products/n-1-trials/research-2014-2
Pereira, J. A., Holbrook, A. M., Dolovich, L., Goldsmith, C., Thabane, L., Douketis, J. D., Crowther, M., Bates, S. M., & Ginsberg, J. S. (2005). Are brand-name and generic warfarin interchangeable? Multiple N-of-1 randomized, crossover trials. Annals of Pharmacotherapy, 39(7–8), 1188–1193. https://doi.org/10.1345/aph.1g003
Pope, J. E., Prashker, M., & Anderson, J. (2004). The efficacy and cost effectiveness of N of 1 studies with diclofenac compared to standard treatment with nonsteroidal antiinflammatory drugs in osteoarthritis. The Journal of Rheumatology, 31(1), 140–149. https://www.jrheum.org/content/jrheum/31/1/140.full.pdf
Punja, S., Eslick, I., Duan, N., Vohra, S., & the DEcIDE Methods Center N-of-1 Guidance Panel. (2014). An ethical framework for N-of-1 trials: Clinical care, quality improvement, or human subjects research? In N. Duan & R. L. Kravitz and the DEcIDE Methods Center N-of-1 Guidance Panel (Eds.), Design and implementation of N-of-1 trials: A user’s guide (No. 13(14)-EHC122-EF, pp. 13–22). Agency for Healthcare Research and Quality.
Punja, S., Xu, D., Schmid, C. H., Hartling, L., Urichuk, L., Nikles, C. J., & Vohra, S. (2016). N-of-1 trials can be aggregated to generate group mean treatment effects: A systematic review and meta-analysis. Journal of Clinical Epidemiology, 76, 65–75. https://doi.org/10.1016/j.jclinepi.2016.03.026
Quantified Self. (2019, March 21). What is quantified self? https://quantifiedself.com/about/what-is-quantified-self
Scuffham, P. A., Nikles, J., Mitchell, G. K., Yelland, M. J., Vine, N., Poulos, C. J., Pillans, P. I., Bashford, G., Del Mar, C., Schluter, P. J., & Glasziou, P. (2010). Using N-of-1 trials to improve patient management and save costs. Journal of General Internal Medicine, 25(9), 906–913. https://doi.org/10.1007/s11606-010-1352-7
Scuffham, P. A., Yelland, M. J., Nikles, J., Pietrzak, E., & Wilkinson, D. (2008). Are N-of-1 trials an economically viable option to improve access to selected high cost medications? The Australian experience. Value in Health, 11(1), 97–109. https://doi.org/10.1111/j.1524-4733.2007.00218.x
Stunnenberg, B. C., Deinum, J., Nijenhuis, T., Huysmans, F., van der Wilt, G. J., van Engelen, B. G., & van Agt, F. (2020). N-of-1 trials: Evidence-based clinical care or medical research that requires IRB approval? A practical flowchart based on an ethical framework. Healthcare, 8(1), Article 49. https://doi.org/10.3390/healthcare8010049
Swan, M. (2013). The quantified self: Fundamental disruption in big data science and biological discovery. Big Data, 1(2), 85–99. https://doi.org/10.1089/big.2012.0002
Tate, R. L., Perdices, M., Rosenkoetter, U., Wakim, D., Godbee, K., Togher, L., & McDonald, S. (2013). Revision of a method quality rating scale for single-case experimental designs and N-of-1 trials: The 15-item Risk of Bias in N-of-1 Trials (RoBiNT) Scale. Neuropsychological Rehabilitation, 23(5), 619–638. https://doi.org/10.1080/09602011.2013.824383
Vohra, S. (2016). N-of-1 trials to enhance patient outcomes: Identifying effective therapies and reducing harms, one patient at a time. Journal of Clinical Epidemiology, 76, 6–8. https://doi.org/10.1016/j.jclinepi.2016.03.028
Whitney, R. L., Ward, D. H., Marois, M. T., Schmid, C. H., Sim, I., & Kravitz, R. L. (2018). Patient perceptions of their own data in mHealth technology-enabled N-of-1 trials for chronic pain: Qualitative study. JMIR Mhealth Uhealth, 6(10), Article e10291. https://doi.org/10.2196/10291
Yelland, M., Nikles, C., McNairn, N., Del Mar, C., Schluter, P., & Brown, R. M. (2007). Celecoxib compared with sustained-release paracetamol for osteoarthritis: A series of N-of-1 trials. Rheumatology, 46(1), 135–140. https://doi.org/10.1093/rheumatology/kel195
©2022 Richard L. Kravitz and Naihua Duan. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.