The Importance of Being Causal

Causal inference is the study of how actions, interventions, or treatments affect outcomes of interest. The methods that have received the lion’s share of attention in the data science literature for establishing causation are variations of randomized experiments. Unfortunately, randomized experiments are not always feasible for a variety of reasons, such as an inability to fully control the treatment assignment, high cost, and potential negative impacts. In such settings, statisticians and econometricians have developed methods for extracting causal estimates from observational (i.e., nonexperimental) data. Data scientists’ adoption of observational study methods for causal inference, however, has been rather slow and concentrated on a few specific applications. In this article, we attempt to catalyze interest in this area by providing case studies of how data scientists used observational studies to deliver valuable insights at LinkedIn. These case studies employ a variety of methods, and we highlight some themes and practical considerations. Drawing on our learnings, we then explain how firms can develop an organizational culture that embraces causal inference by investing in three key components: education, automation, and certification.


Introduction
The increasing abundance of data has enabled data scientists to uncover knowledge and insights that deliver a competitive advantage to firms in the private sector and agencies in the public sector (Hagiu & Wright, 2020).
Data scientists extract value from data by identifying actionable and impactful opportunities, providing business ecosystem insights, and measuring the effect of innovations. While the actionability of an opportunity depends on organizational factors, its impactfulness can be forecasted through careful analysis, known as opportunity sizing, and a deep understanding of the ecosystem. Ecosystem insights come in many different flavors, but the objective is typically to define business metrics, develop new hypotheses, and gather data for future opportunity-sizing endeavors. Assessing the impact of innovations requires a retrospective analysis of how the project changed the relevant outcome (or business metric). In this article, we explain how LinkedIn's data science organization has expanded its ability to produce these three types of value-adding activities by investing in observational causal inference.
Causal inference enables the discovery of key insights through the study of how actions, interventions, or treatments (e.g., changing the color of a button or the email subject line) affect outcomes of interest (e.g., clickthrough rate, email-opening rate, or subsequent engagement; see Angrist & Pischke, 2009; Imbens & Rubin, 2015; Pearl & Mackenzie, 2018; and Rosenbaum, 2017, for comprehensive reviews). Over the past decade, most technology firms and a growing number of conventional firms have greatly expanded their experimentation capabilities to measure the impact of product innovations (Thomke, 2020). In an experiment, the ability to randomize treatment assignment ensures that observed differences in outcomes between treatments are due to the intervention (Kohavi et al., 2020). Unfortunately, randomized experiments are not always feasible, as is often the case when it comes to opportunity sizing, understanding ecosystem insights, and measuring the impact of uncontrolled releases.1 In these examples, directly attributing changes in the outcome to the treatment may lead to biased estimates, as the differences may be due to a third factor (known as a confounder) that impacts both the treatment selection and the outcome. In these cases, we suggest an alternative approach that extracts causal claims directly from observational data; this practice is called observational causal inference (Rubin, 1974).
While observational studies greatly expand the possibilities for causal inference, there are two extremes to avoid. First, data should not be used to retrospectively justify decisions, as this builds false confidence and delivers little value. Second, a lack of observational data should not impede innovation, as some ideas may be so radical that existing data cannot fully capture the size of the opportunity. The ideal situation is somewhere between the two extremes, where data guides innovation without stifling it. For a firm with an established culture of rigorous experimentation, investments in well-designed observational studies extend its ability to uncover strategic opportunities, inform business intuition, and allocate resources optimally. It is important to remember that observational studies cannot replace experimentation; rather, they enhance it. While observational data does not provide perfect information about treatment effects, observational causal studies, when properly applied, make the best use of the available data to improve decision-making ability.
Methods for observational causal inference are not new to statisticians and econometricians, but their translation from research to industry applications remains challenging. To this end, we provide four case studies of how LinkedIn data scientists used observational studies to impact the firm's strategy. Along the way, we draw out themes, highlight critical considerations, and touch upon methods that every data scientist should know, with references for the eager reader. We then discuss how, by investing in education, automation, and certification, firms can develop a culture that embraces causal inference from observational studies. The goal of this article is to help data scientists and business leaders understand how observational causal studies can be applied to improve business, and to catalyze firms to invest more in developing the infrastructure and culture that embrace this practice.
Example 1: Description vs. Prediction vs. Causal Inference. Most data scientists are familiar with prediction tasks, where outcomes are predicted from a set of features. This is fundamentally different from causal inference, which requires an understanding of how interventions will impact an outcome, rather than predicting in a constant state of the world (Hernán et al., 2019). Below, we provide examples of the different types of questions data scientists have to answer (Leek & Peng, 2015).

Descriptive. What types of users have a high attrition (i.e., churn) rate?
Prediction. Can we predict who is likely to churn?
Causal. How do we reduce the likelihood of a user churning?

LinkedIn Case Studies
Each of our four LinkedIn case studies follows these four steps: first, we describe the business context and argue why an observational causal study is necessary; second, we provide the naive estimation strategy; third, we explain the choice of causal method; and finally, we share insights from the analysis. All effect estimates in the case studies are scaled for confidentiality.

About LinkedIn. LinkedIn's vision is to create economic opportunity for every member of the global workforce. Through its website and mobile application, members can explore jobs, build meaningful relationships, and learn about opportunities to help advance their careers. LinkedIn offers enterprise applications that pair with this member ecosystem to deliver value to both members and customers.
LinkedIn Talent Solutions provides recruiting tools to help companies become more successful at talent acquisition, including promoting their company brand and engaging the right pools of qualified candidates.LinkedIn Marketing Solutions and LinkedIn Sales Solutions help customers engage a community of professionals in multiple ways (text advertising, sponsored messaging, lead generation, etc.) to improve brand awareness and build business relationships.LinkedIn Learning helps companies develop talent and helps employees keep vital business skills current with engaging online training and courses.
When to Use Observational Studies. Before starting an analysis, we determine if our question is indeed causal as opposed to descriptive or predictive; see Example 1 for a comparison. If the question is causal, then we decide whether we can answer it through an experiment; if we cannot, then we rely on an observational study. Broadly speaking, we use observational studies in three types of analysis:

1. Opportunity Sizing. Determines if any treatments out of a set of candidates are good business opportunities. Typically, the treatments are not fully implemented, and so we cannot use experiments to assess their feasibility.
2. Ecosystem Insights. Derived from analyzing all aspects of natural firm operations. Although it is possible to learn these through experiments, in practice it is too costly and time-consuming to do so. Instead, we leverage observational causal inference methods to extract practical insights.
3. Uncontrolled Rollout. Occurs whenever the release process of innovations is outside the control of the data scientists. In this case, an observational study is the best method to estimate the causal impact.
For these studies, suitable observational data must be available to test the hypothesis (e.g., the treatment must come before the outcome). The input data structure generally falls into one of four categories: cross-sectional, instrumental variable, panel, and interrupted time-series. Each category supports multiple methods to extract plausible causal estimates, each with its own set of assumptions, strengths, and limitations. One consistent thread across all approaches is that the results depend on the validity of the underlying assumptions. Therefore, data scientists should carefully apply diagnostic tools to assess violations of the underlying assumptions, and sensitivity analyses to measure the robustness of the findings.

Case 1. Opportunity Sizing: Job Postings
Talent Solutions is a suite of tools at LinkedIn that enables employers to attract job seekers by posting jobs on the site.An ideal job posting has a compelling description that provides all of the relevant information required by a job seeker to determine if they should apply.One way to ensure that a job seeker has all of the relevant information is to make some text fields mandatory at the time a job is posted.However, increasing the number of required fields also increases the complexity of posting a job, which can deter or delay new listings.
Answering causal questions about the value of each job attribute can help product designers decide which fields should be required.
In this study, the units are job postings; the treatment is having versus not having an attribute, such as job title, function, industry, location, or employment type. Each attribute corresponds to a different treatment. Of the key business metrics we track, the most important is the view-to-apply rate, which is the probability a member applies to a job after seeing it. For simplicity, we focus our discussion on assessing the benefit of having a job title.
We can obtain a naive estimate by comparing the view-to-apply rate between the jobs that have a job title and those without one. Our naive approach yields an estimate of an approximately 10% difference in the view-to-apply rate. This, however, is likely an overestimate of the effect because of potential confounders. For example, listings from well-known companies typically have a higher view-to-apply rate and are more likely to post jobs with titles. The 10% difference could then be due to the effect of company popularity as opposed to the job title.
Whether or not a job posting has an attribute is determined at creation time, so each unit can only take the treatment once. This one-time treatment is a characteristic of cross-sectional studies. After the job is posted (i.e., the treatment is assigned), we monitor the view-to-apply rate for each posting. The temporal ordering of the intervention occurring before the outcome is vital in ensuring that it is plausible for the treatment to impact the outcome.
Cross-sectional studies are the most prevalent class of observational studies and usually provide a good starting point for answering many causal questions. Table 1 shows the typical data structure of a cross-sectional study: a treatment label collected for each unit, an outcome collected after the treatment, and covariates collected before the treatment; Figure 1 shows the corresponding observation timeline.
Table 1. Cross-sectional study data structure.
Popular strategies for analyzing cross-sectional observational studies follow a two-step process: a design phase and an analysis phase (Stuart, 2010). In the design phase, data scientists try to remove any systematic differences in the observed confounders between the treatment groups. A successful design phase creates a data set consisting of units that are alike in every measurable way, except for the treatment they took.
The vital property of randomized experiments that allows us to identify the causal effect is that the only difference between the two groups is the treatment they took; we can therefore think of the design phase as trying to approximate a randomized experiment. See Rosenbaum (2010) for a book-length review of cross-sectional observational study methods; common methods include matching-based methods (Iacus et al., 2009; Rubin, 1973, 2006; Stuart & Green, 2008) and weighting-based methods (Bang & Robins, 2005; Czajka et al., 1992).
To obtain a more accurate estimate, we matched treated and control job postings based on covariates (e.g., Table 1) that correlate with both the outcome value and the treatment assignment. We used hundreds of categorical covariates, and even after removing outliers, our samples consisted of 16 million job postings. After matching, the treatment effect estimate reduced to 2.4%, a 76% reduction. For other treatments, such as having the job attributes function, skill requirements, and location, we found a reduction of opportunity size on the order of 38% to 56%.
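The matching logic above can be sketched in miniature. The snippet below estimates a view-to-apply lift by exact matching on covariate strata and averaging within-stratum contrasts weighted by stratum size; the strata, field names, and toy numbers are illustrative only, not LinkedIn's schema or results.

```python
# Minimal exact-matching (stratified) estimator for a rate difference.
# All field names and data are hypothetical.
from collections import defaultdict

def stratified_effect(rows):
    """Average the treated-minus-control view-to-apply difference within
    each covariate stratum, weighting strata by their total views."""
    # stratum -> treated? -> [applies, views]
    strata = defaultdict(lambda: {True: [0, 0], False: [0, 0]})
    for r in rows:
        cell = strata[r["stratum"]][r["treated"]]
        cell[0] += r["applies"]
        cell[1] += r["views"]
    total, effect = 0, 0.0
    for cells in strata.values():
        (a1, v1), (a0, v0) = cells[True], cells[False]
        if v1 and v0:  # keep only strata with both treated and control units
            n = v1 + v0
            effect += n * (a1 / v1 - a0 / v0)
            total += n
    return effect / total if total else float("nan")

jobs = [
    {"stratum": "big_co",   "treated": True,  "views": 100, "applies": 20},
    {"stratum": "big_co",   "treated": False, "views": 50,  "applies": 8},
    {"stratum": "small_co", "treated": True,  "views": 40,  "applies": 4},
    {"stratum": "small_co", "treated": False, "views": 60,  "applies": 3},
]
print(round(stratified_effect(jobs), 4))
```

Dropping strata that lack either treated or control units mirrors the design-phase goal of comparing only like-for-like postings; production matching uses far richer covariates.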
Based on our opportunity-sizing analysis, we decided that job posting title and skills were the most impactful fields to include in a job listing. Our main recommendation was that product features should encourage job postings to contain these two attributes. Through this analysis, we also learned the importance of highlighting to job seekers the relevant skills they possessed for a particular job; this led to both user interface changes as well as an update in our recommendation algorithm.

Case 2. Business Ecosystem Insights: Value of Free Trials
LinkedIn is a complex ecosystem that hosts many subproducts, such as a social feed, notifications, member profiles, jobs, and online learning courses. LinkedIn has four companywide metrics and many more product-specific metrics that are sensitive to changes within a subproduct. A question that often arises is, are the product-specific metrics good surrogates for the companywide metrics? In other words, does optimizing for the product-specific metrics necessarily lead to the optimization of companywide metrics? To answer this question, we use causal inference methods to understand how changes in a product-specific metric impact the company metric. These results help each product area to set better goals and develop more accurate metrics.
LinkedIn Learning offers online education through video content. There are a variety of ways for members to access these courses, including organization-provided programs, individually paid subscriptions, and free courses. The product-specific success metric is the number of members whose video watch time in the past 30 days exceeded a certain threshold; we call these engaged learners. To determine whether this is a useful metric, we assess how increasing it impacts companywide metrics; in particular, we focus on revenue generated through purchasing a LinkedIn Learning subscription.
We can compute a naive correlational estimate of the impact by comparing revenue between engaged learners and everyone else. The average revenue generated by engaged learners is 94% higher than that of nonengaged learners. This estimate is likely to be much higher than the truth. To obtain a more accurate one, we can use the results of past experiments that directly impacted the product-specific metric (in this case, engaged learners) as an instrument.
Instrumental variable (IV) methods are ones that use a so-called instrument, a variable that affects the outcome, but only through changing the treatment (Angrist et al., 1996). In other words, an instrument has a direct impact on the likelihood of a user taking treatment, but does not directly impact the outcome.
The exogenous variation introduced by the instrument allows us to isolate an estimate of the causal effect even in the presence of unobserved confounders. This method is particularly attractive for firms that run randomized experiments, as these create a class of natural instruments. While randomized experiments measure the net effect of experiment assignment on the outcome, in many cases, assignment affects the outcome indirectly by encouraging some "treatment." If we are interested in the effect of this "treatment" on the outcome, regardless of how it is encouraged, we can use IV with the experiment assignment as the instrument. For a review of methods and applications, see Angrist & Krueger (2001). Table 2 shows the typical data structure, and Figure 2 shows the observation timeline. In our case, the instrument was a past experiment that promoted LinkedIn Learning's free trial through two arms: members in the first arm were notified of the promotional trial via a banner ad, prompting them to start learning right away, while members in the second arm were not shown the banner ad but had access to the same promotional trial. This experiment directly impacted the proportion of engaged learners (the treatment) but did not have a direct impact on the likelihood of member conversion to paid subscribers (the outcome), other than through affecting engaged-learner status. The IV study established that becoming an engaged learner increases the probability of conversion to a paid member by 46%, only about half the size of the naive estimate of 94%.
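With a binary instrument and a binary treatment, the simplest IV estimator is the Wald ratio: the effect of assignment on the outcome divided by the effect of assignment on treatment take-up. The sketch below uses made-up data, not LinkedIn's:

```python
# Wald instrumental-variable estimator: z is the randomized assignment
# (the instrument), t is treatment take-up (e.g., engaged-learner status),
# y is the outcome (e.g., conversion). Data are illustrative.

def wald_iv(z, t, y):
    """(E[y|z=1] - E[y|z=0]) / (E[t|z=1] - E[t|z=0])."""
    def mean(xs):
        return sum(xs) / len(xs)
    y1 = mean([yi for zi, yi in zip(z, y) if zi])
    y0 = mean([yi for zi, yi in zip(z, y) if not zi])
    t1 = mean([ti for zi, ti in zip(z, t) if zi])
    t0 = mean([ti for zi, ti in zip(z, t) if not zi])
    return (y1 - y0) / (t1 - t0)

# Toy data: assignment raises take-up, and take-up raises the outcome.
z = [1, 1, 1, 1, 0, 0, 0, 0]
t = [1, 1, 1, 0, 1, 0, 0, 0]
y = [1, 1, 0, 0, 1, 0, 0, 0]
print(wald_iv(z, t, y))  # (0.5 - 0.25) / (0.75 - 0.25) = 0.5
```

Dividing by the take-up difference rescales the experiment's "intent-to-treat" effect into the effect of the treatment itself, which is why the IV estimate can differ sharply from a naive comparison of takers and non-takers.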

Case 3. Business Ecosystem Insights: The Value of Contributions
The LinkedIn platform is built to facilitate interactions between members either through public contributions (posting, commenting, or sharing in a social feed) or private contributions (messaging another member).The firm's working hypothesis is that contributions are a starting point for conversations and increase long-term value for the member.Typically, we measure this value by the member's engagement level, which is quantified through metrics such as time spent on LinkedIn.
Although the working hypothesis seems plausible, it does raise some questions: What is the impact of public contributions on subsequent member engagement? Which of the different member contributions drives retention the most? What is the relative importance of public versus private contributions? Answering these questions using an experiment is incredibly challenging because it is difficult to develop interventions that directly elicit member contributions; the member inherently decides which treatment to adopt. Even if we can modulate contribution behavior through user-interface changes, it is difficult to find such an experiment that does not also directly impact many other behaviors, such as scrolling patterns.
A naive approach that directly compares members who like a post to those who do not suggests that liking a post increases the likelihood of returning to LinkedIn in the following week by 80%. Casting this problem in the cross-sectional causal inference framework outlined in Case 1 yields an estimate of 34%, which represents a 57% reduction from the correlational estimate. However, we are still dropping a substantial amount of useful information.
Carefully examining our data, we notice that we observe each user taking multiple actions and subsequently see how their engagement changes in response to their behavior. This generates what is known as a panel data set; see Table 3 for an example of the data structure, and Figure 3 for the observation timeline.
Notice that cross-sectional data is a particular case of panel data, with a single time step. Another important special case is when a single unit is observed over multiple time steps; this is known as a time series. In the next case study, we describe a specific application of causal inference for interrupted time series.
There are multiple strategies for analyzing panel data sets (see Table 3) and extracting causal effects, each with different assumptions. At a minimum, all strategies require that the outcome is observed after the treatment to ensure that the intervention can affect the outcome. The addition of a time component allows us to control for unobserved fixed confounders, as we observe each user over time and see them taking both control and treatment; in a sense, each user can act as their own control (Imai & Kim, 2019).
The applicability and reliability of these methods depend on the validity of the underlying assumptions and should always be combined with detailed diagnostics and sensitivity analysis methods (see, e.g., Chamberlain, 1982; Imai & Kim, 2019; Robins & Hernán, 2008; Sobel, 2012). To improve our estimate, we used a weighted linear fixed effects model, described in Bojinov et al. (2019).
The model allows each user to act as their own control, and the weighting further improves the performance by reducing the discrepancies between the treatment and control units. From our analysis, we concluded that the less engaged members see the most significant gain from contributing. If we had used a purely correlational analysis, we would have overestimated the effect by more than 75% and falsely concluded that highly engaged members have similar benefits. The magnitude of the estimated effect was also much closer to what we observe from typical experiments.
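As a rough illustration of the fixed-effects idea (a simplification, not the weighted model of Bojinov et al., 2019), the sketch below demeans each user's treatment and outcome series and fits a single pooled slope, so each user serves as their own control; all names and numbers are toy values.

```python
# Unweighted unit fixed-effects ("within") estimator on a toy panel.
# panel maps a user ID to a list of (treated, outcome) observations over time.

def fixed_effects_slope(panel):
    """Demean each user's series, then pool a single OLS slope across users."""
    num, den = 0.0, 0.0
    for obs in panel.values():
        t_bar = sum(t for t, _ in obs) / len(obs)
        y_bar = sum(y for _, y in obs) / len(obs)
        for t, y in obs:
            num += (t - t_bar) * (y - y_bar)
            den += (t - t_bar) ** 2
    return num / den

panel = {
    "u1": [(0, 10.0), (1, 13.0), (0, 10.5), (1, 12.5)],  # low baseline, big lift
    "u2": [(0, 50.0), (1, 51.0), (1, 50.8), (0, 49.9)],  # high baseline, small lift
}
print(round(fixed_effects_slope(panel), 3))
```

Because each user is compared only against their own baseline, the very different engagement levels of u1 and u2 cannot confound the estimate, which is exactly the advantage panel data holds over a cross-sectional snapshot.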

Case 4. Uncontrolled Rollout: Marketing Campaigns and Mobile App Preloads
Experimentation allows us to measure the impact of innovations as we control their release to our members.
However, there are times when we are unable to run an experiment because we have no control over which members are exposed to the treatment. Uncontrolled rollout occurs in situations such as marketing campaigns,2 new mobile application releases, and mobile app preloads.3 LinkedIn, for instance, regularly targets specific cities with brand marketing campaigns involving both physical (billboards, radio, and television) and digital marketing channels to increase engagement. Similarly, LinkedIn uses mobile application preloading to increase sign-ups and engagement. Both use cases raise an essential question: What is the return on investment for these marketing activities?
In this section, we analyze marketing campaigns and the impact of app preloads using a fourth type of observational study format.
Interrupted time-series arise when we track an outcome of interest before and after an intervention (Lopez Bernal et al., 2016); see Table 4 and Figure 4 for the typical data structure and observation timeline. The classic example is a marketing campaign launched in a city. The effectiveness of these campaigns is hard to assess through standard experiments because randomization at the member level is impossible. Randomization at the city level results in a small number of units, yielding little power to detect effects without using more advanced tools.
Synthetic control methods are a subclass of interrupted time-series methods that we have found to be particularly useful when there are data on other units that were not affected by the treatment (Abadie et al., 2010). In the marketing example, these could be the outcome metric in cities with no marketing campaign.
Synthetic control methods allow us to use a time series of observed outcomes before the intervention to generate a control group that is comparable to the treatment group under a particular model.We then estimate the causal effect by contrasting the observed time series against a counterfactual estimate of how the outcome time series would have evolved without the treatment, inferred from the synthetic control.
For a popular method, see Brodersen et al. (2015). These methods naturally facilitate impact assessment over time. The input consists of multiple time series: one tracks the treatment's metric of interest, and there can be any number of control time series, whose metric is not required to be the same as the treatment's.

First, we analyze the impact of marketing campaigns. LinkedIn's marketing campaigns are targeted at specific cities or geographical locations. A naive approach to estimating their impact is to count all traffic directed to LinkedIn from the marketing campaign (i.e., referred traffic). However, referral tracking is not always possible (especially for physical marketing campaigns), and even if it were, not all this traffic is incremental, as some members may have visited LinkedIn regardless. Another naive approach is a pre-post comparison of the metric value before and after the campaign. However, this conflates the natural change over time with the effect of the campaign itself.
A better approach is to analyze the data using an interrupted time-series method. We typically use synthetic control methods to measure the aggregate impact at the city level. The treatment region is the city exposed to the marketing campaign, and control regions are cities with comparable characteristics (e.g., sign-up penetration, member engagement, among many others) that were not exposed. Compared to the naive pre-post estimate, the synthetic control estimate can be higher or lower depending on time-series properties. In one such analysis, the naive approach estimated a 20% lift, while the synthetic control estimate was 11%. The large causal impact influenced our decision to expand marketing to more cities. We identified demographic features of cities that responded well to our campaigns to inform strategic targeting of the next campaign.
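A stripped-down version of this analysis, assuming a single control city and a simple pre-period regression in place of a full synthetic control, might look like the following; the series and intervention index are invented for illustration:

```python
# Minimal interrupted time-series estimate with one control series:
# fit treated ~ a + b * control on pre-period data, project the
# counterfactual into the post-period, and average the gap.

def its_effect(treated, control, t0):
    """treated, control: metric time series; t0: index of the intervention."""
    xs, ys = control[:t0], treated[:t0]
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = sum((x - x_bar) ** 2 for x in xs)
    b = num / den
    a = y_bar - b * x_bar
    # Average gap between the observed and counterfactual post-period series.
    gaps = [y - (a + b * x) for x, y in zip(control[t0:], treated[t0:])]
    return sum(gaps) / len(gaps)

control = [100, 102, 101, 104, 103, 105]
treated = [50, 51, 50.5, 52, 57, 58]  # campaign launches at index 4
print(its_effect(treated, control, t0=4))
```

Real synthetic control methods instead learn weights over many control series (and may model seasonality, as in Brodersen et al., 2015), but the core contrast of observed versus projected counterfactual is the same.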
On a practical note, even though this process does not require referral tracking, it does require having suitable controls. Marketing is an ongoing process, so it is important to launch campaigns in an orderly way that facilitates ongoing data collection and impact measurement. If cities are treated in a haphazard fashion, then it can be difficult to find controls that are not affected by an intervention in the measurement period.
Next, we analyze the impact of mobile application preloading. A naive correlational study directly compares revenue from members who use a preloaded app against those who do not. However, along with typical confounding issues, some members who use the preloaded app would have installed it on their own, so the correlational study does not accurately measure the incremental revenue caused by the preload.
Just as in the marketing example, the data structure resembles an interrupted time-series, and so we again used synthetic controls to estimate the causal impact more accurately. Treatment comprises members who used a preloaded app, and we measure the revenue they generate over time. Our goal is to measure incremental revenue to assess return on investment. Rather than measure the impact on treatment as a whole, we split the treatment into multiple cohorts that past ecosystem analysis showed have different monetization values (segmented by previous app installation status, geographic region, etc.). We performed a separate analysis for each cohort and combined the final results in a meta-analysis. Modeling at the cohort level is similar to matching for cross-sectional data. It improves our model accuracy and yields insights into how value differs by cohort. After defining the treatment cohorts, we defined the control cohorts. The ideal controls are members that were just as likely to be exposed to the preload but were not. One approach would be to use propensity matching to identify a similar control set. We opted for a more straightforward approach, defining control cohorts that mirror the ones in treatment. For example, the treatment cohort of 'first-time app users on Android, in the United Kingdom' was matched to a control cohort of United Kingdom members who had never used the app before. We used an interrupted time-series method to estimate the causal effect.
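The per-cohort estimates can be pooled with a standard fixed-effect meta-analysis, weighting each cohort by the inverse of its squared standard error. The cohort labels and numbers below are hypothetical, not LinkedIn's preload results:

```python
# Inverse-variance (fixed-effect) meta-analysis of per-cohort estimates.

def combine(estimates):
    """estimates: list of (effect, standard_error) per cohort.
    Returns the pooled effect and its standard error."""
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * e for w, (e, _) in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

cohorts = [
    (0.12, 0.02),  # e.g., first-time Android users, UK (hypothetical)
    (0.05, 0.01),  # e.g., members with a prior install, US (hypothetical)
    (0.08, 0.04),  # e.g., first-time iOS users, DE (hypothetical)
]
effect, se = combine(cohorts)
print(round(effect, 4), round(se, 4))
```

Precisely estimated cohorts dominate the pooled number, while the cohort-level estimates themselves remain available for pricing each segment separately.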
The results were delivered to our business development teams as a self-serve calculator that estimates the return of a potential preload partnership according to its targeting criteria (e.g., geographic region). Business deals could be negotiated to break even within a certain time horizon. Had we used naive correlation, our price targets would have been 50% to 250% higher than the true value, depending on the cohort. Causal analysis eliminated the guesswork so LinkedIn could negotiate preload deals with confidence.
In both these examples, observational causal methods deliver ongoing support to our marketing and business development teams to enable smart decision making in how we spend money to build our brand and engage our members. Moreover, as they rely on fewer 'guesstimates' than naive baselines, they give executives confidence that the marketing budget will be well spent. Observational causal inference is the method of choice for accurate impact assessment of uncontrolled rollouts.

Organizational Adoption: Practical Lessons From LinkedIn
Observational causal studies provide an important class of tools for making well-informed, data-driven business decisions; unfortunately, data scientists in many firms struggle to apply these in a business context. At LinkedIn, we identified three central components for building a culture that adopts and benefits from observational studies: education, automation, and certification.

Education
Data scientists. While many data science degree programs offer courses on causal inference, because of the breadth of the field, most new hires know little about experimental design and even less about observational studies. At LinkedIn, we created an internal education program, supplemented by external content, to develop causal evangelists who can then educate others. Because internal experts understand both domain context and statistical techniques, they are uniquely equipped to help teams apply methods for practical applications. Our training sessions covered when to use observational causal inference, the assumptions of different methods, proper analysis design, and how to choose the right method for the problem. To supplement our internal employee development, we look more broadly across disciplines when hiring, focusing especially on fields that deal with observational data, such as the social sciences.
Leaders. Data scientists cannot run observational studies in a vacuum. Leadership support is essential for ensuring adoption by decision makers, coordinating high-quality data collection across teams, and supporting the necessary resource allocation. This can be done by focusing on champion use cases to demonstrate value and drawing on real-life illustrations of the dangers of mistaking correlation for causation. For instance, correlation shows, counterintuitively, that asthmatic patients have a lower probability of dying from pneumonia than nonasthmatics. But it would be wrong to conclude that the risk of death is lower for asthmatics; because of their high risk, they receive better treatment, leading to the surprising result of better outcomes (Caruana et al., 2015).
At LinkedIn, we proved the usefulness of observational causal studies to the business by answering a few top-priority strategic questions and comparing the results to correlational studies that would have yielded the wrong investment decisions (for example, Case Study 4). We also demonstrated that observational studies could yield results consistent with a randomized experiment,4 and therefore the results of a well-designed study could be reliably used for decision making. Because of our education efforts, LinkedIn's employees are now aware of the difference between correlation and causation, and that there is another option besides correlation and randomized experiments. Now, both data scientists and business leaders are quick to ask whether claims are 'causal' and advocate for rigorous, accurately communicated insights.

Automation
Even with a trained workforce, it takes significant time to design and run a proper observational study. In our experience, the first iteration of a study takes 2-3 weeks, followed by additional iterations to pass diagnostic tests. Analysis time depends on the complexity of the problem and the scale of the data, but when the cost is too high, few data scientists have the time to run observational causal studies.
At LinkedIn, we dramatically decreased analysis time while increasing quality by automating portions of the work. The main bottleneck in most observational studies is data collection, as this requires joining multiple data sets from different sources while carefully tracking the correct time index (or timestamp). The data sets for observational causal analysis typically have a similar structure: unique unit IDs, timestamps, treatment labels, response metrics, and confounders. Automating the data join overcomes one of the main hurdles to the adoption of observational studies. It also brings privacy advantages, as individual data scientists do not need access to the original data.
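As a minimal sketch of the kind of join being automated (all table and column names here are illustrative, not LinkedIn's actual schema), the core operation is a merge of confounder, treatment, and outcome tables keyed on the unit ID, followed by a check that outcomes are measured after treatment assignment:

```python
import pandas as pd

# Hypothetical input tables, each keyed by a unique unit ID.
confounders = pd.DataFrame({
    "member_id": [1, 2, 3],
    "tenure_days": [120, 45, 300],
})
treatments = pd.DataFrame({
    "member_id": [1, 2, 3],
    "treated": [1, 0, 1],
    "treatment_ts": pd.to_datetime(["2021-03-01", "2021-03-01", "2021-03-02"]),
})
outcomes = pd.DataFrame({
    "member_id": [1, 2, 3],
    "sessions": [10, 7, 12],
    "outcome_ts": pd.to_datetime(["2021-03-08", "2021-03-08", "2021-03-09"]),
})

# Join the three tables on the unit ID.
panel = (
    confounders
    .merge(treatments, on="member_id")
    .merge(outcomes, on="member_id")
)

# Enforce temporal ordering: the outcome must be measured
# strictly after the treatment assignment.
panel = panel[panel["outcome_ts"] > panel["treatment_ts"]]
```

In practice the automated pipeline also handles covariates measured before treatment, but even this toy version shows why centralizing the join removes a per-analysis burden and an easy source of temporal-ordering mistakes.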
To further improve reproducibility, productivity, and trust, we built a causal inference web platform, hosting all four categories of methods described in the case studies. Data scientists can execute an analysis at a click, with backend automation of computation and validation (Figure 5). Data scientists specify the analysis configuration (such as method, treatments, outcomes, and features) through a user interface, and the platform handles the data joins, analysis, and validation. The user interface guides the data scientist to develop a proper design that, for example, satisfies the temporal ordering of treatment and outcome. It also simplifies the process to refresh a previous analysis and quickly iterate on a new one: users can rerun any analysis with adjusted parameters.
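Conceptually, such an analysis configuration might look like the following sketch. The field names and the validation rules are hypothetical illustrations of the kind of guardrails a guided interface can enforce, not the platform's actual schema:

```python
# Illustrative analysis configuration; keys and values are hypothetical.
analysis_config = {
    "method": "doubly_robust",              # one of the hosted method families
    "treatment": "premium_subscription",
    "outcomes": ["sessions_7d", "connections_7d"],
    "features": ["tenure_days", "industry", "country"],
    "treatment_window": ("2021-03-01", "2021-03-31"),
    # Outcomes are measured this many days after treatment assignment,
    # so the temporal ordering of treatment and outcome is respected.
    "outcome_lag_days": 7,
}

def validate_config(cfg):
    """Minimal sanity checks mirroring what a guided UI would enforce."""
    assert cfg["outcome_lag_days"] > 0, "outcome must follow treatment"
    assert cfg["outcomes"], "at least one outcome metric is required"
    assert cfg["features"], "confounder features are required for adjustment"
    return True
```

Encoding the design as a declarative configuration is what makes reruns cheap: refreshing an analysis or iterating on a variant is just resubmitting the same object with adjusted parameters.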
Automation reduced the time it takes to create the first iteration from a few weeks to a few hours. Furthermore, centralization streamlines the review process and ensures high-quality analysis by integrating diagnostics and validation tests. Finally, the democratization of causal methods has enabled us to build a knowledge repository that simplifies the discovery of new insights through web pages that display analysis design, results, and approval status side by side. These web pages can be shared, searched, and organized. Thus, the observational causal platform serves not only as an analysis platform, but also as a repository for causal relationships and ecosystem insights.
The development team, consisting of a small group of data scientists and software engineers, owned the platform and methodology development. Other data scientists can build on the platform directly, with code review and guidance from our development team. For instance, the data scientists working on the brand marketing use case created a custom user interface for input and result visualizations. Our approach of developing a comprehensive platform while enabling customization for the top use cases met both the general and specific needs of data scientists, facilitating adoption.
Like any automated statistical tool, ours offers ample opportunities for abuse without careful guardianship. Observational studies are particularly prone to misuse, as they rely on strong assumptions, some of which are unfalsifiable from the observed data alone. They often need in-depth domain knowledge to create a good design. Education and automation both act as safeguards to inform the proper design, but not every analysis run on the platform is guaranteed to be accurate. That is why we decided to rely on human certification to ensure that only valid results are called causal.

Certification
It is dangerous to assume correlational results are causal; it is even more hazardous to place confidence in poorly designed observational studies. To uphold a high standard, we established the Causal Data Analysis Review Committee to certify causal analyses and ensure the proper interpretation is communicated to business leaders. The committee holds office hours to help data scientists with analysis conception, study design, and result interpretation. During this process, we carefully assess the validity of assumptions, check the design, and ensure there was no abuse (akin to p-hacking or data dredging) by examining the full history of analyses on the platform. Data scientists can request a review through our web platform. The approval status is then displayed within the report so that it is clear which results can be trusted and communicated to stakeholders.
The certification process also allows us to ensure that the results are properly communicated and interpreted in the business context. We typically require data scientists to present estimates along with confidence intervals and model assumptions in simple business terms (e.g., the population used for the study, the features included in the model, and limitations). In this way, without going into technical details, decision makers can understand the boundaries of the analysis. For some studies, notably opportunity sizing, we further push data scientists to think hard about the external validity of the estimates. Often, simple adjustments to align the composition of the sample with the population that will receive any future interventions will improve the quality of the study.
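The sample-to-population adjustment mentioned above can be as simple as reweighting segment-level estimates by the target population's composition rather than the sample's. A minimal sketch with made-up segments and numbers:

```python
# Hypothetical segment shares and effects. The study sample over-represents
# enterprise members relative to the population that will get the rollout.
sample_share = {"enterprise": 0.6, "smb": 0.4}      # shares in the study sample
population_share = {"enterprise": 0.3, "smb": 0.7}  # shares in the rollout population
effect_by_segment = {"enterprise": 0.02, "smb": 0.05}

# Unadjusted estimate: segments weighted by their sample shares.
unadjusted = sum(effect_by_segment[s] * sample_share[s] for s in sample_share)

# Adjusted estimate: segments reweighted to match the rollout population.
adjusted = sum(effect_by_segment[s] * population_share[s] for s in population_share)
```

Here the unadjusted estimate (3.2%) understates the effect for the actual rollout population (4.1%) because the sample over-represents the low-effect segment; the reweighting corrects exactly that compositional mismatch.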
One challenge that certification cannot fix is that as results are broadly shared, problem-specific nuances are less understood, and as a result, there is a tendency to remember a single estimate without the details. Another problem is that users may drop 'observational' and instead say, 'a causal study showed…'. To tackle both of these issues, we carefully educate stakeholders on what the 'causal' label means. We emphasize that observational studies cannot demonstrate causality as convincingly as an experiment, even though they provide a significant improvement over correlational studies. It is helpful to show the simple pyramid diagram in Figure 6.
Although governance for certification and communication adds friction, it is vital to building trust. The committee is currently composed of reviewers from the development team, an impartial horizontal data science team with no personal stake in whether the result is positive or negative. As the firm matures, the certification process can be democratized. We have begun to scale the process by adding members from vertical data science teams to review analyses from other verticals. Through trustworthy results, we foster a culture that believes in following the evidence from data.
Acknowledgments
… for providing feedback on this article; Ya Xu, Parvez Ahammad, and Dan Antzelevitch for their continued support and investment in the observational causal studies initiative. We would also like to express our gratitude to users of the causal inference platform who provided us with many insights for improving it. Finally, we are immensely grateful to the HDSR editorial team for their comments on earlier versions of the article.

Footnote 4: … contribute much to the total metric value. By aggregating cohort-level effects into the overall effect, our estimate of the causal effect (3.2% to 4.3%) overlapped with the value reported by the randomized experiment (3.7%).

Figure 1. Cross-sectional timeline. First measure the covariates, then the treatment assignment, and finally the outcome or success metrics.

Figure 2. Instrumental variable observation timeline. First measure the covariates, then the instrument, then the treatment assignment, and finally the outcome or success metrics.

Figure 3. Panel observation timeline. A key feature of panel data is that each member is observed taking different treatments over time, followed by a measurement of their response.

Figure 4. Interrupted time-series observation timeline. The input data set for the interrupted time-series method consists of multiple time series over the same time period. One of them is the treatment's metric of interest. There can be any number of control time series, and the metric they measure is not required to be the same one as for the treatment.

Figure 5. Doubly robust analysis design page.

Figure 6. Pyramid diagram for the types of studies.

Table 2. Instrumental variable data structure.
Nonsubscribers are sometimes given free access to learning courses for 24 hours. One experiment aimed to increase awareness of the free course access by notifying members in-product. The experiment randomly assigns members into two versions:

Table 3. Panel data structure.

Table 4. Interrupted time-series data structure.