
2024: A Year of Crises, Change, Contemplation, and Commemoration

Issue 6.1 / Winter 2024
Published on Jan 31, 2024

The editorial for the first January issue of Harvard Data Science Review was titled “2020: A Very Busy Year for Data Science (and for HDSR)” (Meng, 2020). At that time, I envisaged adventures such as predicting the 2020 U.S. elections and implementing differential privacy in the 2020 U.S. Census. Little did I know a global COVID crisis was already unfolding. The dire need for reliable data science during the COVID-19 pandemic significantly broadened my initially U.S.-centric examples and unexpectedly granted me bragging rights for what seemed like prophetic predictions. However, aside from those with malevolent intentions, none of us would wish for a crisis that impacts people’s lives and livelihoods, irrespective of its timing or location.

Sadly, for this fifth January issue of HDSR, I find myself compelled to include ‘crises’ in the title. It's no longer a prophetic proclamation but a somber reflection of our current reality. To drive this point home—figuratively speaking but literal in this context—Harvard University commenced 2024 with what my school dean described as a “seismic event” in a message to the faculty (Hoekstra, 2024). As many of you may already know, our new president resigned on January 2, 2024, only six months into her term (Gay, 2024). I wish I could simply be accused of exaggerating a local issue, making it seem Harvard-centric; however, the harsh reality is that this local upheaval is intimately related to a conflict over 5,000 miles away, steeped in extensive history and with an unpredictable conclusion.

The year 2024 will mark another U.S. presidential election. Considering the public's eroded trust in polling during the 2016 and 2020 elections, it's hardly a bold prediction to say that political pollsters will face yet another crisis in regaining public confidence (see Bailey, 2023; as well as Sanders et al., 2023). More alarmingly, a far greater crisis is brewing. In the January 2024 episode of the HDSR podcast, “In God We Trust: Everyone Else Must Bring Data or Liberty,” my co-host Liberty Vittert highlighted the “doomsday” anxiety felt by voters of both the Democratic and Republican parties. Each faction fears that an election victory for the opposing side would signify “the fall of democracy” (Meng & Vittert, 2024). Regardless of the validity of these concerns, the distrust and animosity that half the population harbors towards the other is a crisis for any society.

In the podcast, I asked Liberty, a journalist and columnist, how she managed to be featured by media outlets across the ideological spectrum, particularly during such polarized times. Her response, “I always try to have the backing of the data,” should serve as a daily mantra for everyone who identifies as a data scientist (Meng & Vittert, 2024). Crises tend to disorient people and heighten tensions. Data scientists, pivotal in advancing human lives and livelihoods, possess both the means and responsibility to help alleviate these tensions and tackle crises. We do so by collecting and analyzing the most truth-seeking data and information, and by effectively and scientifically communicating our findings to the public and stakeholders.

Crises force us to think more deeply and act more swiftly, just as disruptive technologies and advancements do. The articles in this issue 6.1 of HDSR document a wide range of responses from data scientists and educators to such seismic events, both metaphorical and, in one case, literal.

Harnessing Crises for Groundbreaking Advances

In this issue's Effective Policy Learning column, the article “The Groundwater Crisis: The Need for New Data to Inform Public Policy” by column co-editor Nick Dudley Ward (2024) is a mind-opener for me. The impact of climate change on groundwater and the significant challenge of uncovering underground data had never crossed my mind. Indeed, in my adulthood, I had never considered groundwater as a potential crisis until reading this article. (As a child growing up in Shanghai during the 1960s–1980s, I was both fascinated and perplexed by reports that artificially recharging groundwater could prevent Shanghai from succumbing to rising sea levels. Revisiting this topic now, a recent article in Down To Earth [Yin & Yu, 2023] has effectively answered the puzzle of my childhood.)

In a fitting irony, the groundbreaking advance on the groundwater crisis stemmed from another crisis: a true seismic event, the massive Christchurch earthquakes in New Zealand during 2010–2011. In their wake, thousands of piezometers, devices for measuring groundwater pressure and level, were installed. This monitoring was crucial because “Water content is a critical ingredient in determining the strength of a soil” (Dudley Ward, 2024). It also created a much-needed byproduct: the New Zealand Geotechnical Database (NZGD).

As Dudley Ward (2024) reports, the NZGD has provided invaluable technical insights into the complexities of Christchurch’s shallow groundwater system. From its high spatial heterogeneity to its great sensitivity to external factors like rainfall, the data have been revelatory. Moreover, the database catalyzed a paradigm shift in the industry, leading to a move toward open data. Before the NZGD's establishment, geotechnical data were often ‘hoarded’ by engineering consultancies. This shift toward democratizing data is in itself a welcome advancement. (Indeed, this coming March, HDSR will dedicate a special issue to the topic of democratizing data.)

Turning Crises Into Catalysts for Educational Reform …

Back in 2017, I was invited as a panelist to the symposium, “Scientific Method for the 21st Century: A World Beyond p < 0.05,” organized by the American Statistical Association (ASA). This symposium was part of the statistical community's response to the so-called replication crisis in science, also known as the replicability or reproducibility crises (though the two terms have distinct meanings, as defined by the National Academies of Sciences, Engineering, and Medicine’s [NASEM] report [2019]). The crisis refers to a concerning number of scientific findings that could not be replicated or were proven false. A finding of this kind is also reported in HDSR (see Schuemie et al., 2020); HDSR has also devoted a special theme to reproducibility and replicability in conjunction with the NASEM report.

Scientific findings, particularly empirical ones, are inherently prone to errors—that's the nature of scientific inference. Hence, we rely on 95% confidence intervals rather than the certain but useless 100% confidence intervals. However, when a significant portion of findings turn out to be erroneous, as Schuemie et al. (2020) reported, the sense of crisis and the urgency for change become palpable. If a manufacturer discovered that 50% of its products were defective, its stakeholders would undoubtedly demand immediate action.
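
The trade-off between confidence and usefulness can be made concrete with a quick simulation. The following is a minimal, self-contained Python sketch (my own illustration, not from the editorial): roughly 5% of nominal 95% confidence intervals should miss the true mean, by construction.

```python
import random
import statistics

random.seed(7)

def interval_covers_truth(mu=0.0, sigma=1.0, n=100):
    """Draw one sample and check whether its nominal 95% CI contains mu."""
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    half_width = 1.96 * se  # normal-approximation 95% interval
    return mean - half_width <= mu <= mean + half_width

# Repeat many times: empirical coverage should be close to, but rarely
# exactly, 95% -- errors are part of the bargain of useful inference.
trials = 2000
coverage = sum(interval_covers_truth() for _ in range(trials)) / trials
print(f"empirical coverage: {coverage:.2f}")
```

A 100% interval, by contrast, would have to span the entire real line, which is why certainty comes at the price of uselessness.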

Indeed, the panel I was asked to serve on called for “The Radical Prescription for Change” (Gelman et al., 2017). Whereas this is not the place to brag about how radical (or conventional) my proposals were, I’m glad to see that two of them (Meng, 2023) share the same aims, respectively, as two articles in this issue (Case, 2024; Peer, 2024). My most audacious proposal involved introducing children to stochastic thinking through ‘kidstograms’—histograms for kids. I posited that engaging kids with data collection and visualization might be more captivating and meaningful than merely learning about arithmetic, because the former is more participatory and pictorial. Such engagements help to foster early appreciation for the complexity of the world and hence to prepare developing brains for nuanced, stochastic thinking.
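
To make the ‘kidstogram’ idea tangible, here is a hypothetical classroom exercise sketched in Python (the votes are invented for illustration): each child names a favorite fruit, and the tally is drawn as a bar of symbols, a histogram a child can both build and read.

```python
from collections import Counter

# Hypothetical votes collected in a classroom; each child names one fruit.
votes = ["apple", "banana", "apple", "grape",
         "banana", "apple", "grape", "apple"]

counts = Counter(votes)
for fruit, n in counts.most_common():
    # One '#' per vote: a text-only 'kidstogram' the class can draw by hand.
    print(f"{fruit:<8}{'#' * n}")
```

The act of collecting the votes and watching the bars grow is the point: the picture emerges from the children's own participation.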

Catherine Case’s (2024) article “Teaching Introductory Statistics in a World Beyond ‘p < .05’” in the Minding the Future column of this issue of HDSR echoes this sentiment. She advocates moving away from simplistic black-and-white rules in determining statistical significance, in direct response to the ASA symposium’s call for major changes in statistics education (Wasserstein et al., 2019), as one might surmise from her article title. Case’s approach of integrating qualitative aspects—like exercising judgment, contemplating assumptions, and interpreting results—into statistical learning is timely and has broader implications. It elevates these often-secondary considerations in our current procedure-centric curriculum to the forefront, enhancing scientific and statistical reasoning skills.

… and Research Innovations

Another proposal I made was the ‘self-executed pufferfish test’ (Meng, 2023). This analogy draws from the practice where chef trainees preparing the partially poisonous pufferfish must first consume their own dish, ensuring utmost diligence and care. Similarly, if researchers applied such diligence and weren’t rushed by the current publication-driven reward systems, we might see a significant decline in false claims. Unfortunately, our current incentives often do not foster such individual accountability.

Thus, reading “Why and How We Share Reproducible Research at Yale University’s Institution for Social and Policy Studies” by Limor Peer (2024) in our new column on Reinforcing Reproducibility and Replicability was professionally gratifying. I urge readers in research institutions to share it with their leaders, regardless of their research areas. Whereas each institution surely should consider its own circumstances, the Yale model provides a tested template, as well as aspirations and inspirations. I, for one, was inspired by Peer’s emphasis on aiming high by considering additional standards such as independent understandability and long-term reusability, which align well with the calls from open research and information science experts (e.g., Borgman, 2019; Pasquetto et al., 2019).

Crises Prompt Deep Contemplation …

The COVID-19 pandemic has compelled many scholars and researchers to engage in broader considerations and deeper contemplation. Two exemplary articles from HDSR highlight this: “Tackling COVID-19 Through Responsible AI Innovation: Five Steps in the Right Direction” by ethicist David Leslie (2020), and “Data Science in Times of Pan(dem)ic” by philosopher Sabina Leonelli (2021).

Another thought-provoking piece in this current issue, “When Data Science Goes Wrong: How Misconceptions About Data Capture and Processing Cause Wrong Conclusions,” from the Diving into Data column and written by Peter Christen and Rainer Schnell (2024), begins by documenting consequential errors made during the COVID-19 pandemic due to mishandling of data. This article is particularly timely and necessary for both the broader data science community and the general public, because it addresses misconceptions that arise before data analysis or machine learning, the stages that typically receive the bulk of journal publications and media attention.

Specifically, Christen and Schnell (2024) list 23 misconceptions arising from data capture, 6 from data processing, and 9 from data linkage. A pervasive misconception is the belief that the software used in these processes is error-free. While software companies strive for reliability, no software is shielded from potential errors. A vivid example, involving life and death, was a computer error that displaced the last letter of ‘Hartford’ into a column indicating whether a person is ‘d’, for dead (The Associated Press, 1992). Luckily, this created only a mini-crisis for a court, since the software was used for jury selection, and few Hartford residents had complained about not being called upon for three years. But one could well imagine the crisis it could create in some other context when software virtually wipes out an entire city. This incident also highlights how computers, while capable of tasks beyond human reach, can also amplify and spread errors at an unprecedented scale (and speed).

The recent pandemic has also propelled advances in data analytics, both in methodology and application. The HDSR special issue on COVID-19 showcases over a dozen such studies. Such research certainly continues, with deeper probing, as documented in this issue in “Assessing the Prognostic Utility of Clinical and Radiomic Features for COVID-19 Patients Admitted to ICU: Challenges and Lessons Learned” by Sun et al. (2024). This study, a collaboration between large teams from the University of Michigan and from Harvard’s T. H. Chan School of Public Health, reflects the collaborative effort and rigor needed for such studies, especially with messy data.

Data are particularly messy when we have little time or energy to carefully design and execute their collection or processing, such as avoiding and reducing incomplete observations, selection bias, measurement errors, data incompatibility, and so on. But that is the very situation during the COVID-19 pandemic and many other natural or human-made crises.

I am therefore grateful to Sun et al. (2024) for documenting the entire data science process, from forming the questions to contemplating clinical implications, without letting those steps be overshadowed by the methodological contributions. I particularly appreciate their emphasis on challenges and lessons learned, along with several simulation studies testing the limitations of the methodologies employed. These deeper probes are difficult to pursue during the peak of a crisis, yet they are crucial for preparing us for future crises.

… so Do Scandals

The crisis of public trust in political polling from the 2016 U.S. elections was not the only data-related adverse event. Carina Albrecht’s (2024) article in the Minding the Past column, “Discerning Audiences Through Like Buttons,” revisits the Cambridge Analytica scandal. Like the article on the groundwater crisis, Albrecht’s article, tracing the history of the ‘like buttons,’ is another mind-opener for me. I knew that expressing like or dislike on social media is not merely a pastime, but I had not thought about how powerful that is as a “data generation machine” (Albrecht, 2024, emphasis in original). Albrecht reminds us of this fact via a fascinating historical account, revealing the origin of the like button and its connections with lie detectors.

In particular, the Program Analyzer, developed in the 1930s to record in real time listeners’ favorable (like) and unfavorable (dislike) reactions to radio programs, is the predecessor of the Perception Analyzer, which allows more granular responses via a (digital) dial, as used today by news networks during political debates or speeches (Albrecht, 2024). Over time, however, the central research contemplation has been not about technological advances, but about what these devices actually measure of listeners’ or participants’ reactions.

As Albrecht (2024) noted, whether Cambridge Analytica’s voter profiles, built by linking ‘likes’ to personal traits, affected the 2016 U.S. presidential election outcome is still open for debate. To answer that question meaningfully would require a counterfactual contemplation: How would the election results have differed had the Cambridge Analytica profiling never taken place? Whereas it seems impossible to answer such causal questions, historically causal contemplation was at the core of deriving actionable insights from listener reactions. As Albrecht pointed out, the inventors of the analyzers realized early on that “discovering the cause of the listener reactions was one of the major problems of radio audience studies; hence, data about only the likes and dislikes counts were insufficient” (2024, emphasis in original).

Causal inference, a central theme in this discussion, is among the most debated topics across disciplines. A paramount issue is to identify genuine causal relationships among suggestive associative relationships, which are much easier to establish, especially statistically. The beautifully presented article “Causation, Comparison, and Regression” by Ambarish Chattopadhyay and José Zubizarreta (2024) in this issue is an essential read for anyone who wishes to gain a deeper understanding of the plausible—and often slippery—paths from correlation to causation.
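
How slippery that path can be is easy to demonstrate. In the following Python sketch (my own illustration, not from Chattopadhyay and Zubizarreta's article), a hidden confounder Z drives both X and Y, producing a strong correlation between them even though X has no causal effect on Y whatsoever.

```python
import random

random.seed(42)

n = 5000
z = [random.gauss(0, 1) for _ in range(n)]    # hidden confounder
x = [zi + random.gauss(0, 0.5) for zi in z]   # X is caused by Z only
y = [zi + random.gauss(0, 0.5) for zi in z]   # Y is caused by Z only

def corr(a, b):
    """Pearson correlation, computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r = corr(x, y)
print(f"corr(X, Y) = {r:.2f}")  # strong, despite X never influencing Y
```

Intervening on X here would change Y not at all, which is precisely the distinction between an associative and a causal relationship.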

Additionally, I’m excited to introduce HDSR’s newest column, Catalytic Causal Conversations, co-edited by Iavor Bojinov and Francesca Dominici. Their inaugural piece, “Causal Inference for Everyone” (Bojinov & Dominici, 2024), aims to make causal inference topics accessible to a broader audience. Communicating complex topics and deep ideas effectively and broadly is increasingly an essential skill in the digital age. I am therefore particularly happy to see the column’s plan to invite postdocs and doctoral students to take a lead in writing some articles, under the guidance of faculty co-authors.

To stimulate deeper and broader contemplation concerning causality, let me conclude this part of the editorial by challenging readers with a thought exercise. Consider the aforementioned ‘like button’ column article (Albrecht, 2024), where the column editor wrote, “Albrecht’s history reveals that the like button is best understood not as a passive recorder of preexisting affect and sentiment, but rather as a data technology that generated the emotive effects the button claimed to measure.” How would one design a study to scientifically assess whether—and to what degree—listeners’ sentiment causes their reactions as recorded by the like button, or whether it is the other way around, as the column editor implied?

Contemplating and Commemorating Paradigm Shift …

The term ‘paradigm shift’ or ‘paradigm change’ is often overused in academia, sometimes for boosting egos and other times for over-the-top accolades (such as in lauding a retiring colleague). Yet, genuine paradigm shifts do occur, particularly with the advent of disruptive technologies. The emergence of ChatGPT and, more broadly, large language model (LLM)-based generative AI, undeniably represents such a shift. In the coming months, HDSR will launch a special issue titled “Future Shock: Grappling With the Generative AI Revolution,” delving into the profound impacts of this technological upheaval.

In this current issue, the article “What Should Data Science Education Do with Large Language Models?” by Xinming Tu, James Zou, Weijie Su, and Linjun Zhang (2024) offers an insightful preview into the ongoing educational paradigm shift. The authors astutely observe, for example, that generative AI is effectively transforming software engineers into product managers, necessitating a paradigm change in data science education. They advocate for a broader skill set in education, including strategy planning, resource coordination, and product lifecycle management—areas rarely emphasized in current data science education. The article's thoughtful articulation of visions, challenges, and potential solutions is particularly commendable, given the authors' early career stages, with the lead author being a PhD student. Such shifts in education are obviously more impactful for younger and future generations. Hence it is extremely fitting and timely to have many young talents lead such an educational paradigm shift, and to promote and commemorate their perspectives and contributions.

Another paradigm change has already occurred, that is, in data privacy protection. As mentioned earlier, 2020 saw the introduction of differential privacy (DP) in the U.S. decennial census. For a comprehensive understanding, readers are invited to browse or dive into HDSR’s special issue on DP in the 2020 U.S. Census, which details the methods and algorithms adopted by the U.S. Census Bureau, the reactions to and debates about the use of DP for census data, and historical and current contemplations regarding the utility and privacy of the census data.

The landscape of this paradigm change is further explored in the leading article in issue 6.1, “Advancing Differential Privacy: Where We Are Now and Future Directions for Real-World Deployment,” co-authored by two dozen researchers from industry (Google, LinkedIn, Meta, Microsoft, etc.) and academia (Columbia, Melbourne, Princeton, Sun Yat-sen, etc.) (Cummings et al., 2024).

This article, born from a 2022 workshop on DP, provides a comprehensive overview of challenges in deploying DP, of building effective DP infrastructure, and of fruitful research directions. It is an extremely rich and thought-provoking article, a resource on which an entire course on data privacy could be built. I particularly appreciate the development-deployment intertwined nature of the article, and how it concludes with a thought-provoking and action-prompting discussion about how to communicate the concepts and guarantees of DP to a variety of stakeholders, from business managers to lawyers to policymakers. Such integrated and intertwined investigations and contemplations make the whole enterprise of data science significantly more than the sum of its parts, and that is truly the essence of having the encompassing notion of data science, not merely as an academic ontological term.
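
For readers new to DP, its core idea can be sketched in a few lines. The following illustrates the textbook Laplace mechanism for a counting query (a classic construction for exposition, not the production algorithms of the Census Bureau or of Cummings et al.): because adding or removing one person changes a count by at most 1, Laplace noise with scale 1/epsilon yields epsilon-differential privacy.

```python
import math
import random

def laplace_mechanism(true_count, epsilon, rng=random.Random(0)):
    """Release a count with Laplace noise calibrated to sensitivity 1.

    A counting query changes by at most 1 when one individual's record
    is added or removed, so noise drawn from Laplace(scale=1/epsilon)
    suffices for epsilon-differential privacy.
    """
    scale = 1.0 / epsilon
    # Inverse-CDF sampling of a Laplace variate from a uniform draw.
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Smaller epsilon means stronger privacy but noisier releases.
print(laplace_mechanism(1000, epsilon=1.0))   # close to 1000
print(laplace_mechanism(1000, epsilon=0.1))   # typically farther from 1000
```

Much of the deployment challenge the article discusses lies well beyond this sketch: composing many such releases, building the surrounding infrastructure, and communicating what the guarantee does and does not promise.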

Another extremely thought-provoking and timely masterpiece, “Data Science at the Singularity” by David Donoho (2024), posits that the data science enterprise itself is undergoing a significant paradigm shift toward “frictionless reproducibility,” accelerating the spread of ideas and practices with unprecedented speed and scale. Whether one (fully) agrees with Donoho's thesis or not, it should be a very stimulating read for anyone in the data science world and beyond. For further context, I recommend pairing it with Donoho’s 2017 article, “50 Years of Data Science.” Both articles also come with commentaries from a wide range of perspectives, which makes them a particularly rich set of tandem readings. Perhaps accompanied by your favorite beverage, the reading will stimulate a holistic reflection on the evolution of data science, with all its glories and gories.

… and the HDSR’s Transition

As we step into 2024, HDSR needs to transition from its initial five-year launch phase into a dynamic growth period. This pivotal shift, while not a paradigm change, invites deep introspection and forward thinking. It's a time to build on the confidence gained from our achievements and to embrace the changes informed by the lessons learned.

Since our inception on July 2, 2019, the collective efforts of authors, reviewers, editors, and staff have culminated in over 430 publications, encompassing 18 regular and 3 special issues as of December 2023. Our podcast, celebrating 3 years, has enriched our outreach with more than 36 episodes covering a diverse range of topics. From the depths of data science to the vastness of space (as featured in our “Are We Alone?” episode), HDSR has largely explored everything under and above the sun in the data science world. Winning the 2021 PROSE Award for Best New Journal in Science, Technology and Medicine was a testament to our commitment to 'Everything data science.'

Yet, the journey toward 'Data science for everyone' is still at its aspiration stage.

Our global digital footprint is almost complete, with readers in every country but one. (Can you guess which one?) Despite this reach, HDSR has cumulatively attracted only about 1.3 million unique users, a statistic that surely implies significantly fewer individual readers, since ‘unique users’ really means distinct IP addresses. (Here is a much harder puzzle and one of the hardest data science challenges: How does one estimate the number of actual readers from the ‘unique user’ statistics, and how does one qualify and quantify the meaning of ‘reader’?)
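
One classical tool for such ‘how many are really out there?’ puzzles is capture-recapture estimation. As a hedged sketch (my illustration with made-up numbers, certainly not HDSR's actual methodology), the Lincoln-Petersen estimator infers a total population from two overlapping ‘captures’, say, readers identifiable in two different months.

```python
def lincoln_petersen(n1, n2, overlap):
    """Capture-recapture estimate of a total population size.

    n1:      individuals identified in the first 'capture' (e.g., month 1)
    n2:      individuals identified in the second 'capture' (e.g., month 2)
    overlap: individuals identified in both captures
    """
    if overlap == 0:
        raise ValueError("no overlap: the estimator is undefined")
    # If the captures are independent, overlap/n2 estimates n1/N,
    # giving N ~ n1 * n2 / overlap.
    return n1 * n2 / overlap

# Hypothetical numbers: 1,200 readers identified in month 1, 900 in
# month 2, and 300 in both, suggesting roughly 3,600 readers in total.
print(lincoln_petersen(1200, 900, 300))  # → 3600.0
```

Of course, the hard part of the puzzle remains exactly as posed: deciding what counts as a ‘capture’ and a ‘reader’ when all one observes is IP addresses.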

To elevate global awareness of our open access content, the HDSR boards are brainstorming and contemplating a host of what we call HDRX initiatives and partnerships. These include HDSA(wards), HDSB(ooks), HDSC(ommunity), and the alphabet continues to HDSW. (Another tease: What does W stand for?) Your support in spreading the word about HDSR, particularly globally and to those eager yet hesitant about data science, is invaluable and hence deeply appreciated. And please don’t forget to mention that over 75% of the articles in HDSR have no Greek letters in them (you can verify that by inspecting this and other issues).

Thank you for your continued readership (from single or multiple IP addresses) and support in sharing our vision. Together, we can democratize data science knowledge for all, transcending barriers and inspiring learning across the globe, and into the 22nd century.


I am deeply grateful for Amara Deis’ skillful editing under great time pressure. To thank the readers for reading my editorial to the very end, I will autograph an inaugural volume of HDSR for the first five readers (who have no affiliation with HDSR) who send the correct answers to both teasers (in which country is HDSR not accessed, and what does the ‘W’ in HDSW stand for?) to [email protected]. The inaugural volume is also available for purchase at MIT Press. Plus, as an added bonus, your purchase will include a copy of the commemorative issue.

Disclosure Statement

Xiao-Li Meng has no financial or non-financial disclosures to share for this editorial.


References

Albrecht, C. (2024). Discerning audiences through like buttons. Harvard Data Science Review, 6(1).

Bailey, M. A. (2023). A new paradigm for polling. Harvard Data Science Review, 5(3).

Bojinov, I., & Dominici, F. (2024). Causal inference for everyone. Harvard Data Science Review, 6(1).

Borgman, C. L. (2019). The lives and after lives of data. Harvard Data Science Review, 1(1).

Case, C. (2024). Teaching introductory statistics in a world beyond “p < .05.” Harvard Data Science Review, 6(1).

Chattopadhyay, A., & Zubizarreta, J. R. (2024). Causation, comparison, and regression. Harvard Data Science Review, 6(1).

Christen, P., & Schnell, R. (2024). When data science goes wrong: How misconceptions about data capture and processing cause wrong conclusions. Harvard Data Science Review, 6(1).

Cummings, R., Desfontaines, D., Evans, D., Geambasu, R., Huang, Y., Jagielski, M., Kairouz, P., Kamath, G., Oh, S., Ohrimenko, O., Papernot, N., Rogers, R., Shen, M., Song, S., Su, W., Terzis, A., Thakurta, A., Vassilvitskii, S., Wang, Y.-X., … Zhang, W. (2024). Advancing differential privacy: Where we are now and future directions for real-world deployment. Harvard Data Science Review, 6(1).

Donoho, D. (2017). 50 years of data science (with Discussions). Journal of Computational and Graphical Statistics, 26(4), 745–766.

Donoho, D. (2024). Data science at the singularity. Harvard Data Science Review, 6(1).

Dudley Ward, N. (2024). The groundwater crisis: The need for new data to inform public policy. Harvard Data Science Review, 6(1).

Gay, C. (2024, January 2). Personal news. Harvard University.

Gelman, A., McNutt, M., & Meng, X.-L. (2017, October 11–13). The radical prescription for change [Conference session]. ASA Symposium on Statistical Inference, Bethesda, MD, United States.

Hoekstra, H. (2024, January 8). A message from Dean Hoekstra. Harvard University.

Leonelli, S. (2021). Data science in times of pan(dem)ic. Harvard Data Science Review, 3(1).

Leslie, D. (2020). Tackling COVID-19 through responsible AI innovation: Five steps in the right direction. Harvard Data Science Review, (Special Issue 1).

Meng, X.-L. (2020). 2020: A very busy year for data science (and for HDSR). Harvard Data Science Review, 2(1).

Meng, X.-L. (2023). Double your variance, dirtify your Bayes, devour your pufferfish, and draw your kidstogram (with discussions). The New England Journal of Statistics in Data Science, 1(1), 4–23.

Meng, X.-L., & Vittert, L. (Hosts). (2024, January 18). In God we trust: Everyone else must bring data or liberty (No. 37). [Audio podcast episode]. In The Harvard Data Science Review Podcast.

National Academies of Sciences, Engineering, and Medicine. (2019). Reproducibility and replicability in science. National Academies Press.

Pasquetto, I. V., Borgman, C. L., & Wofford, M. F. (2019). Uses and reuses of scientific data: The data creators’ advantage. Harvard Data Science Review, 1(2).

Peer, L. (2024). Why and how we share reproducible research at Yale University’s Institution for Social and Policy Studies. Harvard Data Science Review, 6(1).

Sanders, N. E., Ulinich, A., & Schneier, B. (2023). Demonstrations of the potential of AI-based political issue polling. Harvard Data Science Review, 5(4).

Schuemie, M. J., Cepeda, M. S., Suchard, M. A., Yang, J., Tian, Y., Schuler, A., Ryan, P. B., Madigan, D., & Hripcsak, G. (2020). How confident are we about observational findings in health care: A benchmark study. Harvard Data Science Review, 2(1).

Sun, Y., Salerno, S., Pan, Z., Yang, E., Sujimongkol, C., Song, J., Wang, X., Han, P., Zeng, D., Kang, J., Christiani, D. C., & Li, Y. (2024). Assessing the prognostic utility of clinical and radiomic features for COVID-19 patients admitted to ICU: Challenges and lessons learned. Harvard Data Science Review, 6(1).

The Associated Press. (1992, September 3). Court computer says all Hartford is dead. The New York Times.

Tu, X., Zou, J., Su, W., & Zhang, L. (2024). What should data science education do with large language models? Harvard Data Science Review, 6(1).

Wasserstein, R., Schirm, A., & Lazar, N. (2019). Moving to a world beyond “p < 0.05.” The American Statistician, 73(sup1), 1–19.

Yin, J., & Yu, D. (2023, May 16). Rising sea levels could swamp sinking Shanghai. Down To Earth.

©2024 Xiao-Li Meng. This editorial is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the editorial.
