Boston University is to be congratulated on the innovative M.S. in Statistical Practice (MSSP) program described by Kolaczyk et al. (“Statistics Practicum: Placing ‘Practice’ at the Center of Data Science Education,” this issue). Reading the description of their program brought to mind a 25-year-old quote from the American Association of Higher Education Bulletin:
Learning is not a spectator sport. Students do not learn much just sitting in classes listening to teachers, memorizing prepackaged assignments, and spitting out answers. They must talk about what they are learning, write reflectively about it, relate it to past experiences, and apply it to their daily lives. They must make what they learn part of themselves. (Chickering & Ehrmann, 1996).
The idea behind this generic quote can be adapted to learning statistical practice. Before doing so, I will outline one way to think about statistical practice—the statistical inquiry cycle described by Wild et al. (2018) in the International Handbook of Research in Statistics Education. The details appear to have originated with MacKay and Oldford (2000) and were described under a different name by Wild and Pfannkuch (1999). The statistical inquiry cycle comprises five steps, known by the acronym PPDAC:
Problem: You can’t solve a problem without first defining what it is, and you shouldn’t try to solve a problem without questioning why you are doing it. This step should include a discussion of such issues as the ethical implications of possible outcomes and how the problem fits with existing knowledge.
Plan: This step includes such issues as possible sources of data, sampling design, what variables to measure, possible confounding variables, potential analyses, plans for data management and data privacy, and power analyses.
Data: This is the stage at which the data collection part of the plan is executed, and the data are prepared for analysis.
Analysis: At this stage the inquiry process should cycle back to the “problem” step, to make sure the analyses are appropriate for answering the questions of interest. It also should cycle back to the “plan” step to make sure the analysis doesn’t turn into an unethical data fishing exercise.
Conclusion: The conclusion stage includes more than simply reporting results. Discussion should cycle back to the problem defined in the first stage, the strengths and weaknesses of the plans in the second stage, any problems encountered in the data stage, and a summary of the analysis stage.
It should be clear from reading these steps that they are interrelated. The beauty of a program like Boston University’s MSSP is that students are able to experience these steps as a cohesive whole. Using these ideas, the spectator sport quote as applied to learning statistical practice might read as follows.
Learning statistical practice is not a spectator sport. Budding statisticians do not learn to practice statistics by watching their professors carry out disjointed pieces of the statistical inquiry cycle, or even by carrying out the pieces themselves as separate homework problems. They must experience the steps as a cohesive whole. They must work on a problem with a collaborative team, discover what can go wrong with data collection and analysis, discuss ethical issues, encounter setbacks, and ascertain how to present results to technical and nontechnical stakeholders. Only by experiencing the full statistical inquiry cycle as a cohesive experience will students learn to be astute practicing statisticians.
In a long-ago decade (undisclosed) and a faraway place (Penn State University), I earned M.A. and Ph.D. degrees in statistics. The Penn State statistics program was excellent, and I took extensive coursework in both theoretical and applied topics. Statistical computing was in its infancy, but we had a computer lab in the department, and we learned to analyze relatively small data sets and answer questions about them. I was fortunate to be assigned to work on two different research assistantships with scientists in other disciplines, assisted by statistics faculty members. But that is not where I learned to be a statistical practitioner, because I was not involved in the full statistical inquiry cycle. My experience was typical of statistics graduate education at that time.
A few years after I graduated, I had the extreme good fortune of working as a consultant under the tutelage of Brian Joiner, who was one of the early advocates for including statistical consulting as part of graduate training in statistics. He had extensive experience in statistical consulting, learning from W. Edwards Deming, among others, and after a successful career in academia he started his own consulting firm, Joiner Associates. He described his career in a talk at The W. Edwards Deming Institute in 2014. Interwoven with personal history, his talk traces the evolving role of statistical practice in graduate statistics education during the last third of the 20th century.
The project Dr. Joiner and I worked on together was in collaboration with the California Public Utilities Commission (CPUC) and two utility companies in California. The CPUC wanted to know whether agricultural customers would reduce their electric use during peak hours if they were offered ‘time of day’ rates, with lower costs during off-peak hours. My work on the project followed the full statistical inquiry cycle, starting with the problem stage and working through to the final report on the results. I learned many lessons about how to be a statistical practitioner that could not have been taught in a classroom. For instance, I learned about the importance of ‘ecological validity’ when the representative for one of the two electric companies tried to turn the experiment into an interagency competition. He went out to the customers in his company and gave them substantial encouragement to lower their use during peak hours—something that would not happen in the future if these rates were implemented. We needed to explain to him that the experiment would have to mimic the future application of these rates if it was to be valid.
Over the course of my academic career I had many opportunities to observe students present results of data analyses. In our statistics graduate degree programs at the University of California, Davis, we had an oral qualifying exam that required students to analyze a data set. Without guidance, students inevitably failed to meet the standards of a good statistical practitioner. A typical presentation would start by showing the data, rather than by presenting motivation for what one hopes to learn from the data. Next, various models or analyses would be presented, with no justification for how they related to a scientific problem of interest. The presentation would then end with the student identifying the ‘best’ model, justified only by quantitative criteria. Usually they would be stunned by a question like ‘but what would you recommend to the client for how to proceed?’ After enough of these disasters, the department realized that a course in statistical consulting would be a good idea. Personally, I changed the way I thought about and taught data analysis to help students recognize that it was only one part of the statistical inquiry cycle.
All students who hope to practice statistics should have an opportunity to participate in the full statistical inquiry cycle as part of their education. What skills do they need in particular? In 2014 the Oceans of Data Institute (ODI) convened a panel of industry, government, and university experts to determine what should be included in the profile of a ‘big-data-enabled specialist.’ The profile was reviewed by over 150 ‘big data professionals’ and resulted in a list of recommendations for data science education. The Oceans of Data Institute focuses on integrating big data skills into STEM classrooms, and on working with community colleges to enhance workforce training. The list of abilities they compiled was geared to skills that are often ignored in workforce training. As a result, ODI started a mentoring program for community colleges to help faculty understand how to prepare students for the workforce.
The ODI profile defined a big-data-enabled specialist as “an individual who wrangles and analyzes large and/or complex data sets to enable new capabilities including discovery, decision support, and improved outcomes.” The panel outlined the day-to-day skills data science professionals are likely to need, and listed the following aspects of their work that are not easy to convey in a typical classroom setting:
Managing data resources by complying with legal obligations, applying ethical standards, and protecting the data and results
Critically evaluating the results of analyses to determine the level of confidence in the results and estimate the precision and accuracy of answers
Telling a “data story” to convey insights, identify limitations, and provide recommendations based on the results of data analyses
These are the kinds of skills that could be and probably are developed in a program like Boston University’s MSSP. However, in the absence of a comprehensive program like MSSP, it is important to keep the full statistical inquiry cycle in mind throughout the education of statistical practitioners.
It takes a large investment of resources to start a new program such as Boston University’s MSSP. Starting a course in statistical consulting, however, requires a much smaller investment. Anyone reading this who is part of a statistics graduate degree program that does not yet have a consulting course or capstone course should consider starting one. There are many ways to teach such a course, but ideally it should give students the opportunity to experience the full statistical inquiry cycle.
Here I describe the course I taught at the University of California, Irvine. The course is required for Ph.D. students, and offered as a possible elective for M.S. students. When I taught it in Fall Quarter 2017 there were 19 students, with a mix of M.S. and Ph.D. students. Most but not all of them were from the Statistics Department. The course prerequisites ensured that students had at least a year of the applied coursework in our graduate program, so most of the students were in their second or third year. But the students had varying degrees of statistical and computing skills.
On the first day I set out the learning objectives for the course. By the end of the course, the student should be able to do the following:
Structure and run an effective meeting with a client
Align expectations with the client for the role of the statistician
Know how to obtain the necessary information for a successful client interaction
Formulate complex scientific questions as statistical problems
Solve statistical problems and translate the results into solutions for the client
Know how to avoid common errors made by statistical consultants (Type III and Type IV errors; see Stallings, 2014)
Describe statistical methods and results to a nonstatistical audience
Compose reports for nonstatisticians, including an executive summary that outlines the scientific problem, statistical analysis plan(s), and potential limitations
Compose detailed reports on results that could be read by other statisticians
Understand more about career options in statistics, especially in industry
Notice that most of these skills are not acquired with traditional coursework. Yet they are the kinds of skills most of our students will need when they enter the workforce. We are on the quarter system, so I had only 10 weeks to accomplish these objectives. That required students to participate in a mix of hands-on teamwork, reading, and discussion. We also watched and discussed some interesting videos, described below.
Before the quarter began I arranged for five external clients who had projects requiring statistical help. Depending on your university’s consulting arrangements, this can be easy, or it can be the hardest part of teaching the course. Some of these clients were referred to me through our consulting center, which was still in an early development stage at that time, and I found others through collaborative work done by my departmental colleagues. The clients included faculty members from engineering, the medical school, and the dance department, a computer science graduate student, and a researcher working with a local city department of parks and recreation. With 19 students and five projects, I assigned teams of three or four students. I balanced the teams in terms of statistical background, and then asked each team to rank-order their project preferences. I was able to assign each team to their first or second choice.
The course met on Tuesdays and Thursdays for 80 minutes, with an additional 50-minute meeting on Tuesdays. Occasionally, class time was reserved for teams to meet with their clients. When the class met together there were three different activities. Early in the quarter, before teams had worked with their clients, we watched and discussed a series of videos. Some of these were statistical consulting videos provided by Janice Derr (2000) with her excellent book Statistical Consulting: A Guide to Effective Communication (unfortunately the book is now out of print). Others were YouTube lectures covering topics such as cross-cultural communication.
Another of the in-class activities was to discuss a variety of articles related to statistical consulting and practice. Each student either chose from a list of articles I provided, or found an article on their own that was relevant (and preapproved by me). The student then presented the article and led a discussion of the contents. The third in-class activity was a discussion of the client projects. Periodically, each team would present their progress and obtain feedback from the rest of the class. One requirement was that each of the team members participate in each of the presentations. At the end of the quarter each team gave a final presentation with the client in attendance.
In addition to the activities described, there were biweekly homework assignments, and at the end of the quarter each team submitted a final written report. The homework assignments were either directly related to the client projects, or were based on additional assigned readings or videos.
There are other models for a statistical consulting or capstone course. A search of various university statistics department websites often will turn up course notes and syllabi for these courses. One model for an undergraduate consulting course was presented by Tisha Hooks and Chris Malone from Winona State University; their slides are available at https://www.causeweb.org/cause/sites/default/files/ecots/ecots12/posters/hooks_slides.pdf. Winona State was one of the first undergraduate institutions to offer a major in data science.
Students who are planning to work as data scientists after graduating should have experience participating in the full statistical inquiry cycle. Ideally these experiences would be available to students at all levels, including community colleges and undergraduate programs, as well as graduate degree programs. The MSSP program at Boston University provides an excellent example of how that can be done for master’s degree programs. As a profession, statisticians and data scientists should continue to develop models and methods to ensure that there is participation in these kinds of activities at all levels of higher education.
I would like to thank Rob Gould and Wesley Johnson for helpful discussions in the preparation of this commentary.
Jessica Utts has no financial or non-financial disclosures to share for this article.
Chickering, A. W., & Ehrmann, S. C. (1996). Implementing the seven principles: Technology as lever. AAHE Bulletin, 49(2), 3–6. https://www.aahea.org/articles/sevenprinciples.htm
Derr, J. (2000). Statistical consulting: A guide to effective communication. Duxbury Thomson Learning.
MacKay, R. J., & Oldford, R. W. (2000). Scientific method, statistical method and the speed of light. Statistical Science, 15(3), 254–278. http://doi.org/10.1214/ss/1009212817
Stallings, J. (2014, February 1). Type IV errors: How collaboration can lead to simpler analyses. Amstat News. https://magazine.amstat.org/blog/2014/02/01/mastersfeb2014/
Wild, C. J., & Pfannkuch, M. (1999). Statistical thinking in empirical enquiry. International Statistical Review, 67(3), 223–248. https://doi.org/10.1111/j.1751-5823.1999.tb00442.x
Wild, C. J., Utts, J. M., & Horton, N. J. (2018). What is statistics? In D. Ben-Zvi, K. Makar, & J. Garfield (Eds.), The international handbook of research in statistics education (pp. 5–36). Springer. https://doi.org/10.1007/978-3-319-66195-7_1
©2021 Jessica Utts. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.