
The View From Four Statistical Agencies

An interview with Peggy Carr, Hubert Hamer, Emilda Rivers, and Spiro Stefanou by Nancy Potok
Published on Apr 02, 2024


This special issue fireside chat features a panel discussion among Peggy Carr, Hubert Hamer, Emilda Rivers, Spiro Stefanou, and Nancy Potok, focusing on the utilization of data within federal statistical agencies. The conversation delves into the significance of understanding how data are used by various stakeholders, ranging from researchers to policymakers, and the public. It emphasizes the value of tools and platforms that provide insights into data usage patterns, aiding agencies in resource allocation and decision-making. Challenges such as survey response rates are acknowledged, alongside opportunities for innovation and collaboration. The discussion also addresses staffing needs and strategies for institutionalizing the incorporation of user information into agency operations. Offering advice for other agencies, panelists underscore the importance of passion, effective messaging, and framing conversations around the value of data usage insights. Overall, the dialogue highlights the critical role of analyzing data usage in enhancing the impact and effectiveness of statistical agencies’ work, while providing practical guidance for implementation.

This interview is part of HDSR’s Conversations with Leaders series.

HDSR includes both an audio recording and a written transcript of the interview below. The transcript has been edited for grammar and clarity.

Audio recording of the interview.

Nancy Potok: [00:00:01] Welcome to the Harvard Data Science Review’s special issue fireside chat. I’m Nancy Potok, your moderator for today’s discussion. We’re welcoming four statistical agency heads to discuss their experiences piloting the Democratizing Data Search and Discovery Platform, with a particular focus on practical applications and implementation considerations. With us today are Peggy Carr, Commissioner of the National Center for Education Statistics (NCES) at the U.S. Department of Education; Hubert Hamer, Administrator of the National Agricultural Statistics Service (NASS), part of the U.S. Department of Agriculture (USDA); Emilda Rivers, Director of the National Center for Science and Engineering Statistics (NCSES) at the National Science Foundation (NSF); and Spiro Stefanou, Administrator of the Economic Research Service (ERS), also part of the U.S. Department of Agriculture. First, I really want to thank you for taking time from your busy schedules to be here to talk about this important topic.

Spiro Stefanou: [00:01:08] Great. Thank you for having us. Appreciate it.

Nancy Potok: [00:01:11] Our format today is kind of a panel discussion. Conversational. So, let’s jump right into the discussion topics. I’ll open this up to all of you. We’re focusing today on how agencies that provide important data sets for policymaking can identify and better understand their users. Title 2 of the Foundations for Evidence-Based Policymaking Act [of 2018, Pub. L. No. 115-435, 132 Stat. 5529], otherwise known as the Evidence Act, mandates that agencies identify and get feedback on their data from users. Would you explain why identifying and connecting with users of your specific data is important, and what are the benefits to both the users of your data and to you? Let’s start with you, Spiro, and then you can each add your own take on this.

Spiro Stefanou: [00:02:05] Great. Thanks for the opportunity here, for this question. We are participating in the Democratizing Data project, looking at two particular data sets. One is the Agricultural Resource Management Survey. This is a survey of over 30,000 farms that our partners at NASS (National Agricultural Statistics Service) conduct for us. And this is a survey that has particular value to a wide range of stakeholders. For USDA policy, it’s tracking the drivers of farm revenue changes, the price changes, quantity changes by commodity type. It also looks at drivers of production costs. It also looks at the distribution of farm income by farm size, and this has become a particular issue from a Senate request that came recently that wants to look at this asset distribution by farm estates as they start crafting farm tax-related policy. We also collect a lot of information on off-farm income as well, to kind of see how the whole dynamic is working in the farm sector. And other agencies, like the Risk Management Agency, use this to set premiums for crop insurance. Another data set is the Rural-Urban Continuum Codes, which just forms a classification scheme that distinguishes metro counties by population size and nonmetro counties. And it’s a finer distinction that’s being used by the census. And this kind of a data set is used by offices to determine the eligibility of rural development program funding by USDA, HHS (U.S. Department of Health and Human Services), and other agencies. For example, the Federal Office of Rural Health Policy uses this information to determine healthcare units’ eligibility for special funding, given transportation challenges. And researchers from a wide range of disciplines use it outside of agriculture.
Particularly, public health literature focuses on this, looking at supporting research investigations into the incidence of health and well-being disparities geographically, and the impact on equity across rural and urban regions.

Nancy Potok: [00:04:26] Yes, that’s quite a bit. Well, Hubert, you’re also at USDA, and I know a lot of your attention is focused on the data collection end. How about you? What is important to you about this information?

Hubert Hamer: [00:04:41] I’ll just say at NASS, we’re the data collection arm at USDA. We disseminate about 450 reports on an annual basis on all different aspects of agriculture. And in addition to that, we collect the Census of Agriculture data every fifth year. I just want to jump in and say feedback from our data users is critically important to help us balance our limited resources and keep our programs relevant. Our work is cyclical in nature, and we collect information and release data in the form of aggregated official statistics, and our data users convert it into something that’s meaningful for their purpose, often to serve the same individuals that we collect the information from on the front end. So, it’s important that we surface and understand the cycle so that we can convey a data-driven message to decision makers and also to our respondents, because these data do indeed come back to them to benefit them individually and in their communities and in other cases. NASS also surveys these data users on a periodic basis via the American Customer Satisfaction Index program. We have that administered by a third party. Here, data users can not only score us on their satisfaction with our products and services, but they can also provide contextual feedback on anything they would like to share with us. In addition to that, we hold two very large public data user meetings on an annual basis to share information about our program changes, new developments, and to stand directly in front of the data users and get feedback from them. So, it’s very important that we stay connected for a number of reasons to keep our resources aligned with our program delivery.

Nancy Potok: [00:06:34] That makes a lot of sense. Emilda, I know you have a little bit different perspective on this, coming from NSF. You want to weigh in?

Emilda Rivers: [00:06:43] NCSES, the National Center for Science and Engineering Statistics, is at the National Science Foundation, where basic research is funded. We often need to be able to leverage our position in terms of our congressionally mandated reports. One is on the science and engineering indicators; that’s done for the National Science Board, and uses not only the 16-survey data—data from the 16 surveys that we conduct—but also surveys from other agencies that are pertinent to the science and engineering labor force, R&D performance, R&D funding, and also the education of our workforce. So, it’s very important for us that we’re able to talk about where our data are being used and how they are meeting the needs of not only NCSES, but the National Science Board and the National Science Foundation. With these 16 surveys that we produce, we have about 48 analytic reports annually, and we have about a thousand data tables in addition to our data tools and our data profiles. What we’re looking for is to understand more about data usage within NCSES’s data, so that we can refine some of the products that we have to produce with limited staff. So, there’s an ‘investment understanding’ that we need for the decisions for our surveys, our databases, and our data products. How are data being used, when, and by whom? These data can provide a lot of insights into the types of audiences that we reach and those that we don’t. So, this democratizing data is very important for understanding where we can increase our usage base, but also where people are really not getting what they need from the tools that we have. We want to be consistent in how we do that for our communities. They help us understand where we need to link to other sources.
And it’s not only the sources within the United States and the Federal Statistical System, but there are also sources internationally, as our Science and Engineering Indicators report is often in a global context. When we think about researchers, then, and their need for data, we want to use this tool to understand what nonresearcher needs look like. They will help inform where we should target some of our workshops and our outreach. Where should we target conferences? And not only the conferences that we sponsor: what conferences and outreach should we be doing with our communities? Another aspect that I’ll just mention, and we’ll talk about later, I’m sure, is how critical having this information is and the importance of statistical data for evidence building, particularly now when NCSES has been charged with standing up a National Secure Data Service (NSDS) Demonstration Project.

Nancy Potok: [00:10:00] Yes, that makes a lot of sense. Peggy, I want to go back to a point that Emilda brought up here, and that was this idea of outreach and communications. I know it’s one thing to try to reach your users going through some initial workshops, or maybe focus sessions, to get feedback. Hubert, you mentioned a very extensive outreach program to the users that you have. But, Peggy, I want to hear from everybody, but maybe you could start out by saying what some of the challenges are with sustaining collaborations with users. I know you’ve given this a lot of thought. These are collaborations where you’ve worked with other agencies, where you may be sharing your data or linking data with them, and also collaborations with outside researchers. Are there some ways or some tools that the collaborations can be maintained, that you can grow them?

Peggy Carr: [00:11:01] Thank you for your question, Nancy. You know, sustainability of collaboration cannot be ad hoc or just done on a spontaneous basis. You really need a strategy for outreach, and it needs to be systemic, consistent, and ongoing in ways that keep the lines of communication open. This perspective pertains to all public users, those in every facet of our community, and other federal statistical agencies as well. We’ve made great strides. We believe in bringing in data from outside resources to inform the condition of education, which is our major mission. Our partners across the government, many of them sitting around our virtual table here, utilize NCES data products to support their own missions. But we quite honestly don’t know all the times when that is happening. We believe our Digest of Education Statistics, which has thousands of tables, and some of our flagship data collections, such as the National Teachers and Principals Surveys and others, are used by partners. But often we find out only because we’re reading their products or their services or working with them in a collaborative way. That’s just not good enough. We need a systemic way to capture these interactions with our users, both at a policy level, on generalized data, and at a utility level. That is what is needed: a tool that has tentacles that extend in all directions for all different types of users, and then back again to us as the statistical agency. And this tool that we have been talking about today, well, I think it has potential to be that vehicle. So, capturing this information is important, but more importantly, it has to be scalable and sustainable, and it has to be cost efficient.

Nancy Potok: [00:13:14] Yes, that makes a lot of sense. Let me throw that open. Hubert, Emilda, Spiro, do you want to add to that?

Hubert Hamer: [00:13:22] Yeah, I’ll jump in just for a bit. I’ve been in agriculture, this business, a long time, and a lot of this knowledge has been grown through experience. Agriculture is a small circle. You get a chance to work with a lot of different individuals and institutions over time. But this Democratizing Data platform is our attempt to provide the same knowledge to someone who just comes to the agency by giving them a central area to explore these usages and possible linkages. This gives our staff in the field, new and tenured, a leg up when they’re visiting a university or attending a field event. They can quickly get a list of the affiliated researchers at the university or in that area, and then have a more informed conversation and know a little bit more about these data users. Think about how this interaction would be if our staff are approaching folks with an understanding of that person’s work and how that person is utilizing the data and the work that we perform. You talk about these relationships, but that’s more of a personal touch, and it’s difficult to apply without years of experience in that area. So, I think by having this tool, it will give everyone an advantage to be able to learn more about what’s happening with our products and services, and again, to give our staff another tool that they can use to be more prepared for their interactions with data users.

Nancy Potok: [00:14:55] Emilda, I see you shaking your head ‘yes.’ You're agreeing with what Hubert is saying?

Emilda Rivers: [00:15:01] Absolutely. When I think about the conferences and the workshops, there’s so much excitement there. And people are talking about it and we have these great ideas, and what do you know, we have to go back to wherever we came from. And sometimes we don’t keep those interactions. We don’t have access to that workshop report that maybe—I won’t say ‘sits on the shelf.’ It doesn’t, but sometimes it does. And so what this democratizing data tool does is it provides on-demand access. It’s something that’s there when people need it. When people need to connect, they have a community they can go to, other than those that might be in more of a silo type of environment. This is what I see as the benefit here. The researchers can move out of the vacuum or the silos. The people that want to use the data, they start seeing that other people are interested in the data as well. And so we’re breaking down and addressing some challenges that exist. It also provides transparency. I just wanted to point that out as well. When you have people who are so very niched in their areas, communicating that value can be difficult, but the tool allows you to see that rapidly.

Nancy Potok: [00:16:20] Yes. Well, Spiro, you actually held a workshop this past spring. You invited some agricultural economists and other power users of your data. What kind of feedback did you get from the participants about your data, about the platform? How are you responding to the feedback? And do you think this kind of also helps build community and collaboration?

Spiro Stefanou: [00:16:47] So we had a workshop. We had just over 20 participants engaged, and we had worked with your (Democratizing Data) group head and collaborators from the University of Maryland to develop a platform. The goal was to get community input on the usage statistics that were being developed in response to the Evidence Act, and these participants had the opportunity to network with others. This is one of the points that Emilda raised as well. So, they had hands-on time working in small groups, providing feedback, using dashboards, Jupyter Notebooks, and an API (application programming interface). And they had a lot of information that they could access in terms of what does this community look like and who are part of this community. Now, one thing I’ll mention is we always tend to focus on users who are from R1 universities, but there are a lot of others who we aren’t reaching. And I would add that as one of the sustaining collaboration challenges. There are those who take our work, they digest it, they put it out in a form in their communities of interest, maybe extension outreach. State and local governments can have access challenges that range from computing familiarity to software platforms and just plain time. And so this created what Emilda often [calls] “the sandbox,” for them to play in, to discover a way to build a data platform that makes sense to them. So, one of the things we heard is everyone said they want more data to play with, and to combine data with other sources like GIS (geographic information system) data, and to expand the corpus of publications beyond just a scholarly network, and how to engage the community to improve data quality. They talked about input on traceability and fingerprinting of data, a lot of human-computer interaction to update the corpus of data that we can start to search in terms of the literature, and engaging MSIs (minority serving institutions) through targeted workshops and dashboard updates.
And that’s something I want to emphasize: this project lowers the barriers and the cost of access so we can get those users who we haven’t been able to access very well, and maybe we haven’t been looking for them very carefully, to be able to jump in and play. And so when we talk about DEI, I’m talking now about the ‘I’ and the ‘E’ part, being included in this network and then being able to access the network. So not just watching the play, but getting on the field and getting work done. Those are part of the components we have, so lots of different stakeholders. I think this is one of the surprises to me: we need to be a lot more expansive in the pool of stakeholders we’re looking to reach out to.

Nancy Potok: [00:19:57] Yes. Wonderful points. You know, I love podcasts, but one of the drawbacks is that the people listening can’t see everybody nodding their heads in agreement with you. But I’ll vouch for that. Everybody is nodding in agreement. So, Emilda, let’s come back to you for a minute. You mentioned before some responsibilities that you have on behalf of the whole federal statistical system that you’ve taken on at NCSES. It’s a lot of responsibility. You’re the project manager for the single application portal that was mandated by the Evidence Act, and that’s what the researchers use to request access to the protected microdata from the statistical agencies. Spiro mentioned some really good points about increasing access. And in addition, the CHIPS Act [CHIPS and Science Act of 2022, Pub. L. No. 117–167, 136 Stat. 1366] gave NCSES responsibility for piloting the National Secure Data Service. That was a service first recommended by the Commission on Evidence-Based Policymaking. And it was further elaborated on by the Advisory Committee on Data for Evidence Building, which of course, Emilda, you chaired. So you’re very familiar with those recommendations. Given your perspective, then, and these responsibilities that you’ve been given, how do you see all the pieces fitting together, both in the short term and then in the medium term, in providing infrastructure for this whole data ecosystem? And what is your role as head of NCSES for the agency’s role in making the vision a reality, so that we really are realizing the goals of the Evidence Act?

Emilda Rivers: [00:21:40] Thank you, Nancy. It is a lot. And I’ll just start by saying that it is very daunting, but it is exciting and it is needed, because I am a true geek for the Federal Statistical System. I believe that having seamless services is vital to expanding this access, as Spiro talked about. You know, the ‘I’ for included and the ‘A’ for access—that’s the foundation of what we’re trying to do in terms of the transparency. So when we talk about the standard application process, we look at that as the front door. That’s the first place where people can discover what types of federal data exist across this system of ours, and how they can apply for that access. I think that’s very exciting, but there might be some things they want to know before they even get to that point. And that’s where the democratizing data tool can be very beneficial. So you start to see maybe a researcher that wants to, say, explore a topic on usage statistics. Who’s using it? How can they apply for it? They may also be interested in learning what other researchers are doing on this topic. So in the short term, we have a tool that people can use to look across at least these four agencies and hopefully more as we are setting that example to move forward, where they can use these usage statistics to make decisions about applications they might want to put in the standard application process. So this is real. It’s evolving and it’s value added for that type of research. Now research can be very intensive, as we know. And to the extent that this can help people to scope out projects and look at what types of data are available, now they can start to come to the National Secure Data Service demonstration project and present ideas for linking data in ways that they haven’t thought about before. This really already is a foundational system for expanding access and doing so in a seamless way. 
You don’t have to know Peggy intimately, or NCSES, or agriculture, NASS—you only need to know that there’s a one-stop place you can go and start to access it. So this makes it very user oriented. And that is, I think, the role and the vision that we have: expanding who our users are, not just researchers. One area we really haven’t talked about is the role that state and local governments can play in the usage of these data, and I think these tools also start to help give them a foundation for where they can go and the types of things they can do in partnership and collaboration with the federal government. So there’s a whole community that can be accessed in the short term through these tools.

Nancy Potok: [00:24:32] That’s quite a vision. And I’ll put a little plug in here too, for what you’re piloting for the NSDS through America’s Datahub. We’ll put in the link to America’s Datahub so people can see what some of those opportunities are and what that vision is in terms of building out a whole infrastructure. Peggy, let’s go back to you for a second. Education data is in high demand, particularly the longitudinal data that illuminate so much about the effect of education on lifetime outcomes for Americans. We hear a lot about that in the public policy arena, linking education and jobs. It’s a really hot topic, I would say, in the policy arena right now. And you’re kind of in the center of it with data and information. And we know that other agencies, like the Bureau of Labor Statistics, for example, use IPEDS (Integrated Postsecondary Education Data System), one of your data assets, to inform, for example, the price of postsecondary education. Emilda at NCSES is using IPEDS along with census and other data sources to better understand the condition of STEM education, which is a big topic for NSF and NCSES. So where do you, Peggy, see the usage data going in terms of providing value to these efforts? That is, how can the usage data help us understand how these linkages between agencies—and as Emilda pointed out, with the states, which is very important—can increase the value of the data for evidence building?

Peggy Carr: [00:26:17] That’s a good place to start, Nancy. That subject of collaborating and working collectively toward this mission is our goal here with the states. We have that state longitudinal program that’s so well sought after and used by almost every state. Every state except for one—I won’t call them out—has a grant that does exactly what you just described. Longitudinal data from pre-K to way past postsecondary, into adulthood. But those are their data; we don’t actually have ownership of those data. We’re providing the platform in terms of resources and support and technical assistance to make it happen. But we would love to work more collaboratively with them through a tool just like this. But they have to see the value in it. And there, I think, is an opportunity for us all. This is the innovation, I think, that we are looking for in utilizing this tool and moving forward. And it’s not just NCSES. Yes, Emilda, we’ve been working together with the IPEDS data for a while, but, Spiro, you as well, in the work that you do with your rural-urban data products, because we are in that space as well with rural-urban classifications. Often you’ll see it in the literature, and it clearly says that we need to be doing more linking than we are doing. Some opportunities there. And that’s why I think this kind of tool has such value. We see what people are doing in their own research and their own exploration of our data. It gives us a clear path forward of what we should be doing to make it easier and more accessible for them. And then there are researchers that are just evaluating hypotheses, particularly since COVID, about things that we should be linking that we perhaps hadn’t thought about. We just did a very interesting analysis in which we classified schools and school districts based upon preexisting conditions prior to COVID, and this was suggested in the literature.
We took that hypothesis and we explored it, and it looks as though when things were already difficult and challenging for communities, they fared worse during COVID than if that were not the case. But it was through this discussion and the literature, the media, and those who were exploring this in a hypothesized way but not empirically, that we got this idea. This is an important tool for a number of reasons if you think about it at that level: what we should be linking that we’re not linking, what type of analysis we should be conducting that we’re not conducting, where the value can be seen, and what we should be doing as a statistical agency, but also what we should be doing to support the needs of our users.

Nancy Potok: [00:29:39] I think those are really excellent points. And some of this points also to more innovation. Not only data linkages, which are key, but also for agencies like NASS that are involved in a lot of primary data collection, using innovation to kind of overcome the dropping response rates to the surveys. And, Hubert, I have to say, you are one of the most innovative agency heads I have ever encountered when it comes to your data at NASS. You’re always kind of on the cutting edge, looking for new ways of gathering the data, looking at response rates. I know the NASS is really focused in on how providing the usage data might be helpful in improving the response rates to surveys, providing information that’s meaningful to survey participants such as farmers. But not only that—sort of personalizing messages and information available to the interviewers to the Agricultural Extension Service and others in order to encourage participation and raise response rates. So what do you kind of see as the next steps here?

Hubert Hamer: [00:30:58] Thank you. That’s a bunch there. But people generally connect better when something they care about is the conversation piece. So farmers and ranchers are very passionate about their businesses. And you go out and you visit with them, they’ll show you their operation, they’ll give you a tour. So you really get past just talking about completing the questionnaire. And sometimes, because we deal in agriculture, they have value-added products on the farm, like food, and we might have to partake in that to show that we care about our relationship with the farmers and ranchers. So we do some quality control out there also. But we discussed it a little bit earlier: data users add value to the official statistics that we put out. What we’re attempting to do now is find those value-added materials that are closest to the producer’s passion and their business. And this is some of the work we do through the extension service or commodity groups that they follow. The real challenge here is the disparate nature of those works. We’re finding it very challenging to have a scalable approach to bringing those materials together, along with what currently exists on the Democratizing Data platform. And we’re not discouraged by this challenge, but we’re continuing to explore ways to make our products easier to cite so that in turn, we can find those usages more easily. When we’re successful, we’d like to do some linking between demographic things like farm type, locale, or maybe some deliberately collected topics that are of interest to that specific producer. You know, we might have articles with specific topics that we found in their area that match the specific interests of that potential survey respondent. So how close can we get to showing the impact our work has had on the very specific interests of the producer? These are some of the things that we’d like to explore.
And again, when you can show and demonstrate that the time and effort that they take to provide information to us is coming directly back to them and their communities to support the local needs and in some cases have resources allocated based on those data, it makes our job a lot easier.

Nancy Potok: [00:33:29] Yes. I’ll put in another plug here: elsewhere in the special issue, we’ve got a technical paper that is devoted to some work that you’ve done in trying to identify which of your data assets appear, for example, in Agricultural Extension Service reports that may not have precise citations but are using the data. I think that’s a really important paper for people to look at, but it’s also an important thing to explore, looking at these kinds of secondary sources that don’t have the standardized citations back to the data. How are you going to find those users? And those users may actually be more numerous than the researchers. So if people want to know more about that, I recommend that you take a look at the paper that really does a deep dive into what you did in that area. So I’ll ask all of you this question now. Now that you’ve all been through a round of the pilot program and you’ve got initial data and dashboards on who your users are, I wonder if you have any advice for other agencies that are also considering how to get a fuller picture of how their data are being used and connecting with users, which you’ve pointed out is quite important. So, in particular, what would you advise in terms of the type of staff that you need within your own agency, not only to get started, but to sustain the effort? How would you institutionalize the use of the user information and incorporate it as a routine part of your decision-making? A couple of you have mentioned using this for making decisions about resource allocation, about requesting resources, and other types of important planning that you do. So let’s go around the panel and just get a short bit of advice from each of you to close out our podcast today. Spiro, let’s go back to you.

Spiro Stefanou: [00:35:34] Well, thanks. Lots of advice to give. One of the things I’ll focus on is taking the perspective of being a data-driven, evidence-building agency. We put a lot of resources into the process, we generate outputs, and we shouldn’t just be throwing them over the wall to folks. We need to take that next step to convert outputs into outcomes. The policy community and other stakeholders take those outcomes, and they make the impact through their value. So one of the things I focus on is how do we assess the value of publicly provided data. We put a lot of time and a lot of resources into it. And as you mentioned, we’re missing measuring a lot of the value that we are creating. There are costs and risks associated with data production and distribution. We all know about the risks of disclosure and reidentification, as well as reputational risk to the agency. But when you talk about costs, you also have to talk about reward. What’s the utility? We have a lot of stories about how important our data are and how much value they add. How do we start to document that in a more systematic way? So we’re working on a project right now that looks at evaluating the value of publicly available data sets, and the potential value, in a money-metric framework, of free public access to these data. We’re starting with a proof of concept, developing the basic methodology with the data examples I mentioned earlier, and we’re looking to see what we can put together as a starting framework for measuring that potential value.

Nancy Potok: [00:37:20] Great advice. What about you, Peggy?

Peggy Carr: [00:37:22] Well, your summary of some of the obvious benefits, I think, goes without restating, because they’re obvious: resource limitations. We can use it to have conversations with congressional staff who want us to justify, for example, continuing a particular study. So the utility is obvious. Those are some of the benefits we’ve already talked about a little bit. But there are a couple of others that I’m not sure have emerged in our conversation today. The first is our staff. We are a fairly large organization in terms of our portfolio, but not in terms of our staff. We have a very small staff, and often they feel a little overworked. But when they see the value of what they’ve been doing through these types of tools, the tools we’re talking about here, and they can see how others are using their work, it bolsters a certain pride. They can be really, really satisfied that what they’re doing is making a difference. That was one of the surprising outcomes of participating in this project: how prideful people were about their work once this information was summarized through these tools. The second is around recruitment. You mentioned response rates and how we’re all struggling with them—NCES, of course, is no different. So when we’re talking to schools and school districts, it’s nice to be able to say, ‘this is how the data are being used.’ The apple pie approach isn’t always very effective. People want to see that you aren’t just giving them the routine reason why they should participate: because you are a good citizen. They want to see that you’re actually making good use of these data. So those are two reasons beyond the very obvious budget, utility, and feedback benefits that we all see as valuable pieces of this work.

Nancy Potok: [00:39:33] Yes. The utility jumps right out. Emilda, if you were giving advice to another agency who wants to use the platform, what would your advice be?

Emilda Rivers: [00:39:46] Please just do it. That would be the first bit. Because the more agencies we have using the tool, the more valuable it becomes in terms of understanding the federal system and the data ecosystem that I often talk about: what data are there and how are people using them. How are we meeting their needs, and how are we not meeting their needs? We may not fully understand. We get a lot of information from the four agencies that are there. But just imagine having at the public’s fingertips the data across the Federal Statistical System to be able to speak to impact—because that’s the question I am often given at the National Science Foundation, where impact is a very important part of how they describe their research and what we decide to fund. It’s: what is the anticipated impact? And so we’re busy meeting the mission, but you can get under the hood with this type of tool. You can understand more about where you need to allocate your resources. And another link, which Peggy mentioned, has to do with the staff. You can get to know where your staff may need to be strengthened and focused. They get a lot of information from requesters. Those requesters may be asking about data assets, but they are also asking about narratives. You tell me that there are 1.1 million STEM researchers in the United States. What does it mean? What are they doing? Where are they going? Why do we make such an investment? So this is what I would say to agencies: please just do it. Please just get involved. Please give us an opportunity to communicate value in this way with the public.

Nancy Potok: [00:41:33] Right. Okay, Hubert, last word to you.

Hubert Hamer: [00:41:37] Thank you, Nancy. I’m going to always say it’s all about the people. Find someone who’s passionate about the mission of the agency and driven to surface it for everyone to see. It’s also important to message and frame the conversation around what the platform is and what it can be, as opposed to what it isn’t. What I mean by this is that it will never be a complete collection of all the uses of your agency’s data, but it can and will serve as a valuable tool for the uses it does capture, helping to improve response rates and data quality. So I’m just like the rest of the panel; I think people should get involved. Any time you have a tool that allows you to dig into the data and really find out what’s behind it, you have a winning formula. So, get involved and select people who are very passionate to be on your team.

Nancy Potok: [00:42:33] Great. Okay. Well, Peggy, Emilda, Hubert, Spiro, thank you so, so much for sharing your insights and all of your experience. And I know that the HDSR readers and listeners really appreciate you taking the time to be on this panel. And I appreciate it too. So, for all the listeners, please be sure to check the podcast show notes for additional information, and all the relevant links will be in there. Thank you so much. This is Nancy Potok.

Disclosure Statement

Nancy Potok, Peggy Carr, Hubert Hamer, Emilda Rivers, and Spiro Stefanou have no financial or non-financial disclosures to share for this interview.

©2024 Nancy Potok, Peggy Carr, Hubert Hamer, Emilda Rivers, and Spiro Stefanou. This interview is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the interview.
