
Future Shock: Grappling With the Generative AI Revolution

Published on May 31, 2024

In the jarring opening scene of the 1972 documentary Future Shock, a film directed by Alexander Grasshoff and based on the classic study by sociologist and futurist Alvin Toffler (1970), the blurred silhouettes of a man and a woman walk side by side down a serene country road. As the two figures amble toward the camera, a series of dystopic cutaways of police and military violence, physical trauma, and civil unrest is interposed. Meanwhile, atonal synthesizer music and sounds of human screaming blare in the background. The camera zooms in on the fast-approaching figures, and an abrupt shift from fuzzy soft focus to sharp shallow focus reveals the soulless but wide-eyed glare of two blond-headed robots. Orson Welles, the film’s narrator, then appears and delivers the lines that contain the movie’s leitmotif:

Future shock is a sickness which comes from too much change in too short a time. It’s the feeling that nothing is permanent anymore. It’s the reaction to changes that happen so fast that we can’t absorb them. It’s the premature arrival of the future. (Grasshoff, 1972, 03:11)

Just over a half-century ago, Toffler (1970) coined the term “future shock” to capture the widespread societal dislocation effected by the rapid advent of the digital revolution. On his account, the continuous and accelerating changes brought about by this technological transformation were causing a bewildering overhaul of familiar forms of life and a “shattering stress” in the lived experience of individuals “subjected to too much change in too short a time” (p. 4). Toffler’s concerns were rooted in how a society ill-prepared for such sudden changes could not cope with the accelerating pace of the innovation-induced demolition of existing human institutions, norms, and practices, raising the real prospect of a “massive adaptational breakdown” (p. 4).

Since the launch of ChatGPT in November 2022, the explosive rise of generative AI (GenAI) has raised urgent questions about the extent to which society has become afflicted with future shock. Evidence of “massive adaptational” pressures abounds—from unprecedented strains on standards of academic integrity and learning in higher education, and widespread distress about the harmful effects of machine-generated misinformation, disinformation, and propaganda on the integrity of information ecosystems and democratic processes, to almost instantaneous system-level impacts on labor dynamics across content-creation-focused sectors such as marketing, advertising, publishing, graphic design, gaming, journalism and media, software development services, the music business, and photography and film production.

Likewise, the flooding of sociodigital space with empathy-simulating chatbots powered by commercial-grade large language models (LLMs) has put immediate pressure on everyday norms of interpersonal interaction and basic rights to psychological and emotional integrity. Such conversational agents can deceptively lead people to believe they are interacting with sentient and caring companions, exposing users to potential political and consumer manipulation, fraud, misplaced emotional attachment and overreliance, loss of agency, and dignity violations.

Much like the haunting approach of the blond-headed robots in the Future Shock documentary, such LLM-powered conversational agents have come onto the scene as a difficult and disconcerting admixture of familiarity and otherness. On the one hand, they have predominantly been trained through reinforcement learning from human feedback to exploit human predispositions for interpersonal connection by mimicking the commonplace intimacies of kinship-building behavior, fellow feeling, and interactive rapport. They have been designed with all the reassuring bells and whistles of anthropomorphic familiarity. On the other hand, behind the facade of the friendly chatbot interface, they are entirely alien amalgams of unrevealed digital, physical, and human infrastructure. They are computational Frankensteins composed of opaque and inaccessible ‘black-box’ algorithmic architectures; wired with intricate networks of software patches, guardrails, and automated content filters; trained on unfathomable web-scraped data sets that harbor a vast array of secreted toxic and discriminatory content; developed and run in environmentally costly data centers and on carbon-hungry cloud computing servers; dependent on hidden armies of ghost workers (often from migrant communities or the ‘Global South’) who annotate harmful content; and steered by large, market-moving tech corporations which operate in the shadows of global technoindustrial supply chains with very little transparency and even less accountability.

As critics have closed the distance between the alien machinery of these all-too-concealed sociotechnical and political-economic substrata of the GenAI innovation ecosystem and the tranquilizing dynamics of anthropomorphic familiarity just described, the harsher realities of future shock have increasingly come rushing to the surface.

In this special issue, “Future Shock: Grappling With the Generative AI Revolution,” we initiate Harvard Data Science Review’s arduous but imperative journey to explore the broad spectrum of questions raised by recent advancements in foundation models and generative AI tools like ChatGPT, interrogating, in particular, the extent to which these advancements are presenting society with dangers of future shock. To what degree, and how, is the accelerating pace of the generative AI revolution putting novel, and potentially unsustainable, pressures both on accepted norms and practices of scientific research, teaching, scholarship, and academic publication and on broader social, cultural, economic, political, and legal structures, dynamics, and institutions? Does the rapid proliferation of generative AI applications represent an inflection point in the evolution of data science and AI and in the scope and scale of their societal impacts, or is this sense of ‘revolution’ itself merely a by-product of the hype created by tech evangelists and their critics? Has the commercial race to bring generative AI tools to market (as evidenced by the recent actions of OpenAI, Microsoft, Meta, and Google, among others) exposed a perilous gap between the accelerating pace of present-day technological expansion and the development of novel sociotechnical, ethical, legal, and regulatory vocabularies sufficient to provide adequate responses to the basic practical questions that this expansion raises for the society of tomorrow?

Wide-Ranging Reflections, Contemplations, and Explorations

In our open call for submissions for this special issue, we stressed that we would prioritize a multi-lens and interdisciplinary approach, seeking submissions that instantiate state-of-the-art research from a wide range of academic specializations while remaining accessible to nonspecialists and scholars from other disciplines. We particularly welcomed submissions along two concentration tracks:

  1. Clarifying the nature and limitations of foundation models, large language models, and generative AI applications. We asked that these articles delve into the scientific and technical dimension of foundation models, LLMs, and generative AI applications, focusing on making clear their statistical, mathematical, and data scientific underpinnings and their conceptual strengths and weaknesses. We also asked that submissions in this track aim to sharpen an understanding of these methods for data scientists and interrogate what is really happening in the mathematical machinery of foundation models, LLMs, and generative AI applications, both in the theory supporting them and in the practice of using them in the real world. Areas of focus could include:

    • strengths and limitations of the transfer learning and self-supervised learning techniques and the transformer/attention-based architectures used in the design of foundation models and LLMs

    • challenges and opportunities related to the design of multimodal foundation models and the linkage and integration of text data, structured data, and image, audio, and video data

    • challenges and opportunities related to the multitask interaction of generative AI tools with lived environments including interaction with human agents and other automated systems in myriad social and cultural milieus

    • challenges and opportunities related to the integration of multistep or chain-of-thought reasoning techniques and ground-truth- and reference-checking mechanisms into foundation model architectures

    • challenges and opportunities related to the integration of foundation models and sequential decision-making techniques (including application of reinforcement learning, planning, long-term memory, imitation learning, and so on)

    • challenges and opportunities related to emergent abilities in foundation models based on zero-shot or few-shot prompting

    • challenges and opportunities related to the interpretability and compositionality of foundation models; challenges related to the prioritization of the predictive accuracy of foundation models over causal explanation and understanding of the rationale underlying their outputs (i.e., tension between priorities of ‘engineering’ and ‘basic science’); strengths and limitations of the application of current post hoc explainability techniques to foundation models

    • strengths and limitations of current performance evaluation and benchmarking regimes for foundation models and for domain- or task-specific generative AI tools; challenges and opportunities related to applying current evaluation criteria for AI/machine learning systems (e.g., safety, security, reliability, robustness, fidelity, fairness, bias mitigation, and training/operational efficiency and environmental impact) to foundation models and their tailored applications

    • challenges related to the use of public and nonpublic large-scale data sets for the training of foundation models; challenges related to the illusion of the ‘veracity of volume’ and the reliance on data quantity to solve problems in research and model design; challenges related to scaling appropriate methods of data cleaning, curation, and engineering to ensure bias mitigation and redress of harmful or discriminatory content

    • challenges related to tendencies of foundation models to ‘hallucinate’ or generate nonfactual content and mitigation methods

    • challenges related to the reproducibility and replicability of the results of foundation models, LLMs, and generative AI applications (given both intrinsic model limitations and the lack of access to underlying code, training data, and model details owing to model producers’ claims of IP protection, security concerns, and computational limitations)

  2. Exploring the wider societal risks and impacts of foundation models, LLMs, and generative AI applications. We asked that these articles engage in critical, sociotechnical, and ethical considerations of the transformative effects of the rapid proliferation of foundation models, LLMs, and generative AI applications (1) in the context of practices of scientific research, teaching, scholarship, and academic publication; and (2) in broader social, cultural, economic, political, and legal contexts. Areas of focus could include:

    Context of scientific research, teaching, scholarship, and academic publication

    • challenges to research integrity rooted in the integration of foundation models and generative AI applications into the scientific method, given the opacity of these models and their inability to access ground truth

    • challenges to scientific ingenuity and discovery deriving from reliance on foundation models which draw on data from past insights to generate inferences—potentially engendering paradigm lock-in and stifling novelty—and which fail to extrapolate effectively when world knowledge changes or data distributions drift over time

    • challenges to research integrity and originality related to reliance on foundation models in scientific writing; risks of plagiarism, authorial misrepresentation, and scaled academic dishonesty

    • risks of diminishing or weakening the writing and critical thinking skills of researchers and students due to overreliance on generative AI tools to produce arguments and text


    Broader social, cultural, economic, political, and legal contexts

    • challenges in reducing the systemic biases, discriminatory use of resources, and negative cultural impacts of the growth and increasing dominance of AI

    • challenges related to algorithmic bias and discrimination that arise in foundation models, LLMs, and generative AI applications from the existence of harmful social stereotyping and demeaning, oppressive, disempowering, or discriminatory behavioral patterns in the training data

    • challenges related to the replication of hateful, harassing, or abusive language, imagery, or other learned representations in foundation models, LLMs, and generative AI applications trained on data containing such harmful content

    • challenges related to ‘value lock-in’ from static data containing crystallized social norms that can be of a discriminatory or harmful character

    • challenges related to failure modes of content moderation filters and technical security patches added to generative AI tools to prevent misuse or abuse (e.g., ‘jailbreaks’ and workarounds), allowing users to surface harmful or discriminatory content and to weaponize these tools for harmful or criminal purposes

    • challenges related to cybersecurity threats posed by misapplication of generative AI tools by users who are able to bypass preventative content filters and create new strands of polymorphic malware, fraudulent phishing campaigns and infostealers, ransomware and dark web marketplace scripts, and code that helps hackers evade detection

    • risks related to the misuse of generative AI tools by users who are able to bypass preventative content filters to engage in bioterrorism, biowarfare, chemical warfare, bomb making, and other hostile activities

    • challenges related to data leakage, exposure of sensitive information, and violation of privacy and data protection rights by the use of generative AI applications

    • challenges related to the exploitation and elimination of skilled human labor by generative AI tools that are trained on data produced by such labor and then automate, replicate, and replace associated productivity (e.g., voice technologies that are trained on actors’ work without consent and then emulate and replace them for commercial voice-overs and narration)

    • challenges related to the differential performance and variable functioning of foundation models, LLMs, and generative AI applications for underrepresented cultural, social, or language groups

    • challenges related to the deceptive anthropomorphism of conversational agents or other humanlike interaction platforms powered by generative AI; risks of the “ELIZA effect,” infringement on dignity, and manipulation

    • challenges related to the misunderstanding of foundation models’ capacity for understanding or sentience (claims that they can ‘think,’ ‘believe,’ ‘understand,’ etc.)—that is, the erroneous projection of interpretive ability and intellectual competence onto systems that are predicting the probability of word sequences based on their statistical distribution in the large language corpus on which such systems have been trained

    • challenges related to the scaled production of disinformation, misinformation, and propaganda by misused, abused, or irresponsibly deployed generative AI applications that can flood the digital public square with misleading and nonfactual content, undermining social trust and the integrity of interpersonal communication

    • challenges related to the distortion or poisoning of downstream data sets and language corpuses by the online digital traces produced by generative AI applications themselves (scaled production of misleading or nonfactual digital content produced by generative AI becomes a part of humanity’s digital archive, baking corresponding corruptions into the underlying data distributions of future data sets)

    • challenges related to the centralization of research and innovation capacity for the development of generative AI systems in the hands of big-tech firms that control data access and compute infrastructures; risks arising from corresponding research agendas being shaped and driven by private commercial interests while, at the same time, critically affecting the public interest in myriad ways

    • challenges related to macroscale economic effects of the proliferation of generative AI systems such as mass labor displacement, expansion of income inequality, elimination of vulnerable industries and labor subpopulations (such as the creative professions), concentration of economic power in the hands of the owners of the means of AI production, exacerbation of wealth polarization, and substantial increases in global inequality

    • challenges related to the transformation of the workplace by generative AI applications and the automation of service-oriented and skills-based tasks, affecting the dignity of work, labor equity, fair pay, fair conditions, and worker wellbeing, creativity, and autonomy

    • challenges related to the environmental costs and biospheric impacts of training, developing, and using foundation models, LLMs, and generative AI tools

    • challenges related to the contextualization of AI in infrastructure, commerce, government, and related fields to promote social trust—what do we need to do to promote and ensure ethical behavior, adequate transparency, accuracy, security, privacy, and so forth?

    • challenges related to the situating of AI in legal contexts—determination of authorship and IP, expansion of copyright, who is responsible and who is liable when AI is involved in an infraction, and so on

    • challenges in promoting humanity over AI—should we have a right to a human decision-maker when we disagree with an AI-driven decision? Should we have a right to opt out of automated systems and still have the option of receiving the same (or similar) products and services?

Reflecting on the continuing relevance of this extensive list of themes and issues (originally composed over a year ago), we find it remarkable just how many of the challenges we signaled then remain largely open and unaddressed. This is perhaps prime evidence that the GenAI revolution has, to some degree, triggered future shock. Though academic research energies have exponentially increased in this area since the commencement of GenAI’s industrial age, the translation and transfer of the insights emerging from this burgeoning body of scholarship to current international AI policy and governance discussions have so far been scarce, and actual policy impacts arguably even scarcer. While the contributions to our Future Shock Policy Forum in this special issue attempt to initiate steps toward redressing this impact deficit, our broader multidisciplinary community of academic researchers in data science and AI still faces urgent questions: How do we, as a community of practice, bridge the unsustainable gap between the haphazard rush forward of GenAI commercialization and the application of robust and critical research insights and good data science to rein in such heedless behavior? How do we draw on the outcomes of this research to directly inform the development of governance controls that sufficiently respond to the practical challenges that emerge from such ecosystem-level irresponsibility and incaution? How do we direct the energies of academic research toward advancing the development and use of GenAI systems for the public good?

What We Feature With This Initial Launch

It is, in fact, with all this in mind that we determined the format of this special issue. First, to gather original research, we put together a more conventional general section in which we could assemble articles that were representative of far-reaching efforts within the academic research community to understand various aspects of GenAI-prompted future shock. The six articles in the general section of this initial launch provide glimpses into the range of interventions that the special issue will roll out until its completion on December 2, 2024 (just after the 2-year anniversary of the launch of ChatGPT). The article by Jing Liu and H. V. Jagadish (2024) on “Institutional Efforts to Help Academic Researchers Implement Generative AI in Research” hit home for both of us as academic researchers. Liu and Jagadish discuss how generative AI is (future) shocking “the traditional academic research model in fundamental ways.” Their concerns range from ill-preparedness among researchers who do not know how to responsibly use or apply GenAI technologies to their work, to research outcomes created by rushed adoption that fall short in ethics, rigor, and reproducibility. The authors stress that these concerns are not unique to generative AI, “but could also be true for other upcoming and similarly disruptive technologies,” that is, other future shocks. The article calls for research institutions to develop new mechanisms to help researchers more responsibly adopt especially disruptive technologies that can cause seismic changes.

It is worth noting that generative AI has disrupted the educational enterprise in universities to a greater extent, and more quickly, than it has their research activities, as detailed in the article “What Should Data Science Education Do With Large Language Models?” by Xinming Tu, James Zou, Weijie Su, and Linjun Zhang (2024), featured in the latest (regular) issue of Harvard Data Science Review.

The article in this special issue by Q. Vera Liao and Jennifer Wortman Vaughan (2024) on “AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap” argues forcefully that “It is paramount to pursue new approaches to provide transparency—a central pillar of responsible AI—for LLMs,” and that “we must do so with a human-centered perspective” because of the contextual nature of the goals of different stakeholders.

The article provides an overview of existing approaches to achieving transparency across the AI and machine learning lifecycle and provokes readers to contemplate the applicability, or even the feasibility, of these methods for LLMs. Ultimately, Liao and Wortman Vaughan argue for a needs-responsive approach to LLM transparency that draws on insights from the study of human-computer interaction, centering “lessons learned about how people process, interact with, and make use of information.” The article demonstrates well the need to engage concomitantly in wide-ranging brainstorming, deep-dive introspection, and reflection on lived experience to guard ourselves as much as possible against the turbulence of future shock.

The article on “The Turing Transformation: Artificial Intelligence, Intelligence Augmentation, and Skill Premiums” by Ajay Agrawal, Joshua Gans, and Avi Goldfarb (2024) provides another deep contemplation that has immediate and lasting ramifications. Rather than seeing task automation driven by AI systems as inevitably leading to negative outcomes for workers and employment prospects, the authors argue that “task automation, especially when driven by AI advances, can enhance job prospects and potentially widen the scope for employment of many workers.” These positive effects are driven, they claim, by

the potential for changes in the skill premium where AI automation of tasks exogenously improves the value of the skills of many workers, expands the pool of available workers to perform other tasks, and, in the process, increases labor income and potentially reduces inequality.

They coin the phrase “Turing Transformation” to identify this mechanism and call for AI researchers and policymakers to acknowledge that “one person’s substitute is another’s complement,” suggesting that the automation-driven replacement of jobs that are occupied by and benefit those who enjoy outsized incomes from them can potentially reduce inequality and create opportunities for others not at the top of the income distribution.

In their contribution, “Can ChatGPT Plan Your Retirement?: Generative AI and Financial Advice,” Andrew Lo and Jillian Ross (2024) also explore the transformation of work in the GenAI era, but they zoom in on the challenges that automation poses to the expertise-driven professional practices in which LLMs can be applied, such as financial advising, medicine, law, accounting, education, psychotherapy, and marketing. They focus, in particular, on three problem areas facing most of these LLM applications: “domain-specific expertise and the ability to tailor that expertise to a user’s unique situation, trustworthiness and adherence to the user’s moral and ethical standards, and conformity to regulatory guidelines.” To illustrate the depth and scale of these challenges, Lo and Ross look at the high-stakes and heavily regulated domain of financial advice—a practical sphere that has a long history of assistive automation, for example, the robo-advisors (automated portfolio management, trading, and wealth management platforms) that first emerged in the wake of the 2007–2008 financial crisis. While Lo and Ross stress the current inchoate condition of contextually and humanly responsive financial advisory GenAI systems, they propose an “evolutionary approach” to the development of these applications that “incorporate[s] some notion of selection, evolution, and fitness into the ongoing training of LLMs.”

Our final launch contribution, “How Can Large Language Models Help Humans in Design and Manufacturing?” is an impressive two-part exploration of new opportunities and challenges in the area of generative design—a field of research and innovation that aims to employ LLMs and GenAI systems across the product design and manufacturing life cycle. In Part 1, “Elements of the LLM-Enabled Computational Design and Manufacturing Pipeline,” Liane Makatura et al. (2024a) explore two running examples of possible uses of GenAI: a static design problem (a cabinet) and a dynamic system design problem (a quadcopter). They find that, “while LLMs are versatile and widely knowledgeable about the entire manufacturing pipeline and many target design domains, they currently tend to make mistakes and can require human teaming on hard problems.” Particular challenges include “a lack of spatial reasoning capabilities, causing them to struggle with geometric complexity in design,” an inability to seamlessly scale to complex problems or long prompt-response conversations, and a lack of fluency in topics where data is scarce or unavailable at scale. “Despite these limitations,” the authors argue, LLMs “facilitate rapid iteration, and, when used judiciously, can still solve hard tasks when a user aids GPT-4 in decomposing problems at the module level.” Part 2, “Synthesizing an End-to-End LLM-Enabled Design and Manufacturing Workflow” (Makatura et al., 2024b) then helpfully synthesizes the elements explored in the first part, focusing on how each LLM-augmented step can be composed into end-to-end realizations of physical, functioning devices.

Future Shock Policy Forum

Our second goal in organizing this special issue was to provide a dynamic space for discussion and debate on the policy and governance challenges surrounding the design, development, and use of GenAI technologies. As we initially contemplated how we could provide readers with some immediate big-picture insights into these far-reaching challenges, we came up with the idea of publishing a Policy Forum, which could create a living snapshot of the salient policy discussions that have been happening at an inflection point in the history of technology. The explosion of generative AI applications in the wake of ChatGPT’s launch in November 2022 has raised urgent questions about how to govern and regulate these powerful technologies at both national and international levels. How can policymakers, regulators, civil society organizations, and members of the public swiftly and effectively respond to the far-reaching risks posed by foundation models and generative AI? How can these risks be weighed against the potential positive impacts of these technologies on people, society, and the planet? Are binding international regulatory and governance regimes needed given the global scale of the threats posed by the possible weaponization, misuse, or unintended consequences of foundation models and generative AI? Are such binding regimes even possible given existing geopolitical dynamics and priorities?

The Policy Forum section of this special issue collects short, op-ed style position papers and policy analyses on these topics from leading public sector and civil society organizations from around the world. This initial policy forum launch features a broad range of pieces that examine different aspects of how the GenAI revolution has presented “massive adaptational” challenges for (and put immense pressure on) existing social institutions, policy norms, regulatory regimes, and governance practices. In putting this section together, we have prioritized drawing together multisector, cross-disciplinary, and geographically diverse expertise, and we have sought to spotlight the broad spectrum of lived experiences of those affected by these technologies. These pieces are led by a detailed introductory editorial by David Leslie and Antonella Maia Perini (2024), providing both a much-needed aerial view of the international GenAI policy and governance ecosystem and a timely reflection on how the rapid industrialization of generative AI has triggered future shock for the global AI policy and governance community.

Don’t Be Too Shocked

Future shocks (and future “future shocks”) will undoubtedly, well, shock us. Many of our current contemplations may turn out to be wrong, or even shockingly wide of the mark. Uncertainty is a feature, not a bug. And yet, the process of undertaking these wide-ranging contemplations will also undoubtedly augment human intelligence, individually and collectively. Confronting uncertainty (and the unknown) is an unchosen, if inevitable, human burden, but it is equaled by our unique power to collaboratively reflect on past learning and present experience to shape informed visions for our collective futures. Such an intergenerational capacity to share learning and experience toward the ends of collaborative future-making is the very hallmark of human intelligence. It endows us with the unique aptitude for collective artifice—the faculty for social creativity and historical self-transformation that sets us apart from the generative ‘artificial intelligence’ systems and LLMs we build and deploy.

When all is said and done, these systems are computational creations rather than contemplating creators. They are, as Murray Shanahan puts it, “mathematical models of the statistical distribution of tokens in the vast public corpus of human-generated text” (Shanahan, 2024, p. 70). Rather than warm-blooded agents of history compelled to deftly walk the tightrope of uncertainty, they are inert mapping functions. The patterns they reproduce (in mathematically transforming inputs to outputs) are the products of the statistical distribution of the history of language use and of the untold labors of the innumerable speaking, interacting, and co-creating humans whose linguistic and symbol-mongering activities have been captured in the digital archive. These systems are, as a consequence, entirely disconnected from the actual trials and triumphs of history. They are thus, unlike us humans, characteristically not subject to the shocks of future shock (and future “future shocks”).
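To make this point concrete, the toy sketch below (our own illustration, not the method of any author or system discussed in this issue; the tiny corpus and function names are purely hypothetical) treats a “language model” as nothing more than a fixed mapping from the most recent token to a probability distribution over the next token, estimated from counts in a corpus. Real LLMs learn vastly richer representations with neural networks, but the inert, input-to-output character of the mapping is the same in kind.

```python
# A toy bigram "language model"—illustrative only, and in no way the
# architecture of any real LLM. It makes concrete the idea that such a system
# is an inert mapping function: token in, probability distribution over the
# next token out, estimated from the statistics of a text corpus.
# The corpus and names below are hypothetical placeholders.
from collections import Counter, defaultdict
import random

corpus = ("the future is not what it used to be . "
          "the future arrives faster than we can absorb .").split()

# Count which token follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_distribution(prev_token):
    """The mapping function: given the last token, return P(next token)."""
    counts = follows[prev_token]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(start, length=8):
    """Sample a continuation purely from the learned corpus statistics."""
    out = [start]
    for _ in range(length):
        dist = next_token_distribution(out[-1])
        if not dist:  # no observed successor; stop generating
            break
        tokens, weights = zip(*dist.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(next_token_distribution("the"))  # e.g., {'future': 1.0}
print(generate("the"))                 # a statistically plausible continuation
```

Nothing in this sketch experiences or anticipates anything; it only redistributes the statistics of the text it was given—which is precisely why such systems, unlike us, are not subject to the shocks of future shock.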

These final points should act as a critical sense-check on how we have thematically framed this special issue. When we refer to GenAI as triggering future shock, some may see this as implying that the technology itself operates as an agent of history, endogenously causing or initiating future shocks. But we have to be careful here. Subtly granting agency and initiative to pulseless software programs can lead to a misguided sense of technological determinism, which erroneously assumes the inevitability of the development of runaway technologies and leads us into complacency in the face of potential social harm. This can send us ‘into a state of shock,’ paralyzing us and displacing the very capacities for social creativity and historical self-transformation that are preconditions of human future-making. Instead, this volume of Harvard Data Science Review aims to interrogate future shock as originating in the all-too-human choices that developers and technologists have made in designing, producing, deploying, and commercializing GenAI technologies. In just this sense, we hope to put humans—as makers of technology and as originators of the beliefs and values that steer the pursuit of AI innovation and the formulation of technology policy and governance regimes—in the driver’s seat. We invite all our human readers to join us in this driver’s seat to embark on a contemplative journey of collaborative reflection and future-making, informed by the kind of visionary explorations and careful research that are sampled in the contributions to this special issue.


Disclosure Statement

David Leslie and Xiao-Li Meng have no financial or non-financial disclosures to share for this editorial.


References

Agrawal, A., Gans, J., & Goldfarb, A. (2024). The Turing transformation: Artificial intelligence, intelligence augmentation, and skill premiums. Harvard Data Science Review, (Special Issue 5). https://doi.org/10.1162/99608f92.35a2f3ff

Grasshoff, A. (1972). Future Shock [Film]. Metromedia Producers Corporation.

Leslie, D., & Perini, A. M. (2024). Future Shock: Generative AI and the international AI policy and governance crisis. Harvard Data Science Review, (Special Issue 5). https://doi.org/10.1162/99608f92.88b4cc98

Liao, Q. V., & Wortman Vaughan, J. (2024). AI transparency in the age of LLMs: A human-centered research roadmap. Harvard Data Science Review, (Special Issue 5). https://doi.org/10.1162/99608f92.8036d03b

Liu, J., & Jagadish, H. V. (2024). Institutional efforts to help academic researchers implement generative AI in research. Harvard Data Science Review, (Special Issue 5). https://doi.org/10.1162/99608f92.2c8e7e81

Lo, A. W., & Ross, J. (2024). Can ChatGPT plan your retirement?: Generative AI and financial advice. Harvard Data Science Review, (Special Issue 5). https://doi.org/10.1162/99608f92.ec74a002

Makatura, L., Foshey, M., Wang, B., Hähnlein, F., Ma, P., Deng, B., Tjandrasuwita, M., Spielberg, A., Owens, C., Chen, P. Y., Zhao, A., Zhu, A., Norton, W., Gu, E., Jacob, J., Li, Y., Schulz, A., & Matusik, W. (2024a). How can large language models help humans in design and manufacturing? Part 1: Elements of the LLM-enabled computational design and manufacturing pipeline. Harvard Data Science Review, (Special Issue 5). https://doi.org/10.1162/99608f92.cc80fe30

Makatura, L., Foshey, M., Wang, B., Hähnlein, F., Ma, P., Deng, B., Tjandrasuwita, M., Spielberg, A., Owens, C., Chen, P. Y., Zhao, A., Zhu, A., Norton, W., Gu, E., Jacob, J., Li, Y., Schulz, A., & Matusik, W. (2024b). How can large language models help humans in design and manufacturing? Part 2: Synthesizing an end-to-end LLM-enabled design and manufacturing workflow. Harvard Data Science Review, (Special Issue 5). https://doi.org/10.1162/99608f92.0705d8bd

Shanahan, M. (2024). Talking about large language models. Communications of the ACM, 67(2), 68–79. https://doi.org/10.1145/3624724

Toffler, A. (1970). Future Shock. Bantam Books.

Tu, X., Zou, J., Su, W., & Zhang, L. (2024). What should data science education do with large language models? Harvard Data Science Review, 6(1). https://doi.org/10.1162/99608f92.bff007ab


©2024 David Leslie and Xiao-Li Meng. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
