
The Art of Randomness: Sampling and Chance in the Age of Algorithmic Reproduction

Published on Oct 30, 2024

Keywords: generative AI, probability, randomness in art, creativity


Introduction

Across artistic mediums and throughout history, randomness has been a powerful muse for catalyzing human creativity. The author Pliny the Elder wrote of the Greek artist Protogenes drawing inspiration from a chance dab of a sponge (Pliny the Elder, 1991), the 8th-century calligrapher Zhang Xu would use his own hair as a brush during drunken episodes (Zhang Xu Calligraphy, 2024), and the artist Leonardo da Vinci, in his 1651 Treatise on Painting, advised artists to seek inspiration in “stained walls” (daVinci, 1956). The benefit of inspiration from randomness is clear: it helps us break out of our own minds and biases to sample more widely from the full range of possibility.

Today, as generative AI tools are increasingly incorporated into creative production, there are concerns about and evidence of an emerging algorithmic monoculture, a convergence to a status quo on both the supply and demand sides of the creative industry. On the demand side, algorithmic curation and decision-making can result in convergent preferences and behavior for media consumption (Fourcade & Healy, 2024; Kleinberg & Raghavan, 2021). On the supply side, creators are increasingly incorporating AI tools into media production by using generative models for both brainstorming and direct media synthesis, in domains as diverse as filmmaking, musical performance, screenplay writing, illustration, and beyond. While these tools may increase the efficiency of media production, they are heavily inclined toward repeating patterns in their training data and thus homogenizing creative outputs (Anderson et al., 2024; Doshi & Hauser, 2024; Epstein & Hertzmann, 2023; Levent & Shroff, 2023; Messeri & Crockett, 2024). For example, in a creative idea generation task, Anderson et al. (2024) found that the use of ChatGPT (compared to random draws from Oblique Strategies—a deck of cards with prompts to support creative thinking) produced more ideas on average, but the ideas were less semantically distinct and less divergent. How can artists and other AI tool users navigate this trade-off between efficiency and divergent creativity? How can these tools be reimagined to support divergent creativity?

Curiously, this specter of creative homogenization seemingly runs contrary to how these AI systems actually work. Modern AI tools have famously been called ‘stochastic parrots’ (Bender et al., 2021), relying on random seeds and other injections of randomness to produce (and improve the quality of) their outputs. For example, for much of the last decade, AI text generation has relied on random sampling to combat ‘neural degeneracy,’ whereby the single most probable sequence of words was redundant and ‘glitchy.’ But now, as model development has evolved, particularly through the use of fine-tuning methods such as reinforcement learning from human feedback (RLHF) (Christiano et al., 2017; Ouyang et al., 2022), injecting randomization has proven to be increasingly unnecessary for high-fidelity outputs: current state-of-the-art models have refined their probability distributions to the point that the maximum likelihood output is now superior to randomized outputs (Bai et al., 2022; Kadavath et al., 2022). This shift also aligns with important efforts to produce factual, safe, and reproducible models, as random outputs risk being hallucinatory and harmful. While this trend of increasingly deterministic parrots is beneficial for solving real-world problems (e.g., asking a model how to treat the common cold), what will be the effects on creative production?

We argue that this departure from stochastic systems may worsen the homogenizing effects of AI-assisted creativity. Said another way, the engineering goals of building models that converge to high-fidelity, likelihood-maximizing outputs threaten a mode collapse that would flatten opportunities for serendipitous encounters with these models in creative contexts. To understand an alternative path forward, we trace a historical tradition of artists who have employed randomness and chance in other forms of art, locating questions of creative agency and choice that connect intricately with analogous questions raised by generative AI. Finally, we carry these ideas forward in a discussion of and call for stochasticity-induced ‘happy accidents’ in modern human–AI collaboration.

Dada Science

There is a rich tradition of artists who explicitly use randomness as a creative impetus itself, a tradition that crystallized in the first three decades of the 20th century. Experiencing the horrors of World War I, artists across the European continent felt reason and rationality were to blame for society’s woes (Malone, 2009): reason and rationalism had controlled European thinking since the Enlightenment, and now Europe was burning. Artists active in the Surrealist and Dada movements hoped to relinquish control and reason, placing their hopes in nonsense and chance not just as inspiration but as active guides for their art. This shift involved ostensibly removing subjectivity and choice from the sampling process: the artist would define a generator and pre-commit to following it wherever it would lead. This paradigm sought to liberate the artist from the shackles of their own interpretation, allowing an ‘unbiased’ concept to shine through. Yet from its inception, the line between rigorously following the dictates of chance and personal decisions of aesthetic sensibility has always been blurry.

Take for instance Jean Arp’s Untitled, executed in 1916–1917. The story goes that Arp made the work “by tearing paper into pieces, letting them fall to the floor, and pasting each scrap where it happened to land” (Arp, 1916–1917). Yet an untrained eye can easily detect that this story is likely apocryphal: the aesthetics of Arp’s collage are far too orderly. All the scraps share congruent orientations, many are aligned, and none overlap. While Arp may have been attempting to cede control to chance, he clearly constrained his randomization with great care (Robertson, 2006).

These nuances of when and how to cede control to randomness went on to define the next several decades of chance aesthetics. In time, works by the artist Ellsworth Kelly shifted focus away from the realization of a generative process and toward the process itself. In 1951, struggling to capture the shimmering reflection of the river Seine in a composition he was trying to abstract, Kelly followed personal encouragement from Arp to further explore chance in his compositions. In his painting Seine, Kelly adopted a literal stochastic process of sortition, drawing lots to determine the locations of black versus white squares, as shown in his study for the painting. By changing the proportion of black versus white lots as he moved from column to column, Kelly produced a realization of a random process whose distribution he had completely defined before it was realized.
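Kelly’s procedure can be read as a fully specified stochastic process, and simulating it makes the distinction between distribution and realization concrete. The sketch below is a speculative reconstruction: the grid dimensions and the triangular schedule of black proportions are our illustrative assumptions, not Kelly’s documented rules.

```python
import random

def seine_panel(n_rows=20, n_cols=41, seed=None):
    """Simulate a Seine-like sortition: each cell draws a lot that is
    black (1) or white (0), with a per-column probability of black.
    The triangular schedule below (rising toward the center columns)
    is an illustrative assumption, not Kelly's documented rule."""
    rng = random.Random(seed)
    center = (n_cols - 1) / 2
    panel = []
    for col in range(n_cols):
        # Proportion of black lots rises linearly from the edges to the center.
        p_black = 1.0 - abs(col - center) / center
        panel.append([1 if rng.random() < p_black else 0 for _ in range(n_rows)])
    return panel  # list of columns, each a list of 0/1 cells
```

Every call yields a different panel, but every panel is a realization of the same distribution; Kelly’s contribution, on this reading, is the distribution itself.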

This idea of ‘chance aesthetics,’ ceding creative control to randomness, was taking root in some artistic circles. Into the second half of the 20th century, artists were increasingly focused on sampling from distributions they had carefully designed, thus further diminishing the subjectivity inherent in selecting a particular realization relative to others. Following Seine, Kelly worked to strip his art of impressionistic elements—no more shimmering rivers, no shadows—producing a series of collages and paintings derived from pure, mechanized chance. In the series of eight collages Spectrum Colors Arranged by Chance I–VIII, all from 1951, Kelly drew lots to determine the colors of squares according to an evolving set of carefully designed sampling rules (see his notebook #17, pp. 17–40, Kelly, 1952). While Kelly drew lots, other artists of the same era similarly designed careful stochastic processes from which to sample. In 1951, John Cage began composing ‘aleatoric’ music using divination techniques to consult the I Ching. A few years later, in 1960, François Morellet executed a painting descriptively titled Random Distribution of 40,000 Squares Using the Odd and Even Numbers of a Telephone Directory, 50% Blue, 50% Red.

Of course, while Kelly and Morellet both colored thousands of squares strictly according to chance, each artist chose the colors, shapes, and dimensions of their work, and these choices strictly define the distribution that their work samples: artists in this movement were contributing distributions, defined by mechanical processes, at least as much as they were contributing physical artworks.

However, when Kelly went on to explore more discrete, combinatorial generators with his 1952–1953 work Red Yellow Blue White and Black with White Border, we see a markedly different approach to sampling realizations from a distribution. He painted this work on seven separate canvases, proposing that the owner could reconfigure the work by reordering the panels into any of the many possible configurations. On a sheet of typewriter paper with the header Painting with a Hundred Variations (1953), Kelly chaotically hammered out what suggests itself as an exhaustive list—it is not—of ways the panels of red, yellow, blue, white, and so on can be reconfigured. In a separate study, 103 Studies for Seven Color Panels with Border (1953), Kelly even arranged 103 possibilities.

But in a 2002 letter, Kelly described how he reneged on his inclination to view Red Yellow Blue White and Black with White Border as a realization of a chance configuration: “after making the collage of variations, I decided the original configuration of the finished painting was my best solution and the only solution for the painting” (p. 37, Grynsztejn & Myers, 2002). Here, we observe a sharp departure from a commitment to randomness for randomness's sake. Rather, we see Kelly exploring the space of possibilities and taking a stand that, based on his sensibilities, one realization is ‘better’ than the others. His reneging here underscores a key tension in generative art: sometimes we are interested in the generative distribution, but often we care more about how individual realizations stand alone, with the generator primarily serving as a means toward an end.

This tension is crystallized in Jorge Luis Borges’s 1941 short story The Library of Babel (Borges, 1999), which introduces a library that contains every possible book consisting of 410 pages of text in a given alphabet. The problem with such a library—suspending disbelief about its physical impossibility—is that almost all books are gibberish. Short strings of coherent language, let alone ideas, are extremely rare. And yet, all ideas are in the library, somewhere. Borges’s library explores what a uniform distribution over all possible text means, with a provocation to see how sparse and rare both coherent and creative work are within that distribution (Bottou & Schölkopf, 2023). Unlike Kelly’s Seine or Spectrum Colors Arranged by Chance, the analogy of Borges’s library highlights the fact that some books are ‘better’ than others, and just as with Kelly’s experience falling back on his chosen composition of Red Yellow Blue White and Black with White Border, the librarian of this library is principally interested in finding the ‘gem in the rough.’ Under this view of chance and curation, randomness is not an absolutist force to be followed for its own sake, but rather a muse that can help augment creativity by offering a departure from the expected.

Reclaiming Randomness in the Age of Generative AI

At their core, generative AI models are high-dimensional probability distributions. The media they generate are sampled realizations from these distributions. When you prompt GPT-4, it samples from the distribution of probable next words conditional on the preceding text. When you prompt Midjourney, images are sampled from a joint distribution over text and image (Mansimov et al., 2016), conditional on the given text. Lay users can easily experience this randomness by repeating the exact same prompt several times, with different results. The organizations that have built these massive models have effectively built (highly complex) generators in the tradition of Arp or Kelly. And this probabilistic interpretation of generative AI reveals a natural truth: their variance contains multitudes. In this view, sampling can meaningfully allow a user to explore possible representations of an idea by meandering through a latent ‘idea space’ and curating the model outputs that they find most meaningful (Zhou & Lee, 2023).
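In miniature, this ‘distribution first, realization second’ framing looks like the following sketch. The vocabulary and weights here are toy assumptions standing in for a real model’s conditional next-word distribution; they are not actual model probabilities.

```python
import random

# Toy next-word distribution conditional on a fixed prompt.
# The words and weights are illustrative assumptions, not model output.
NEXT_WORD = {"river": 0.4, "bridge": 0.3, "fog": 0.2, "gulls": 0.1}

def repeat_prompt(dist, k=5, seed=None):
    """Draw k independent realizations from the same conditional
    distribution: the 'same prompt', sampled k times."""
    rng = random.Random(seed)
    words = list(dist)
    weights = list(dist.values())
    return [rng.choices(words, weights=weights)[0] for _ in range(k)]
```

Repeated calls with different seeds return different sequences even though the distribution never changes, which is exactly what a user observes when re-running the same prompt.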

But just as Kelly painted the pixels of the Seine, when we sample from these distributions, we mirror the underlying patterns, politics, and poetics of those distributions: the space of possibility contained within these generators does not represent the full “Library of Babel,” but rather one peculiar corner filtered through both individual and collective subjectivities. For example, consider when a prompter asks a model to ‘be creative’ in the process of creating artifacts (see Figure 1). In-prompt calls for creativity necessarily enact and perpetuate narrow notions of what ‘creativity’ really is, rooted in the annotations codified in the training data. Whose notion of creativity? Whose values does it implicitly reflect? At the time of writing, technical communities and the art world do not have clear answers to these critical questions. Without a cogent response, prompting a model in such a way may ironically backfire, perpetuating monoculture and convergent, clichéd notions of creativity.

Figure 1. Images generated by DALL-E 2 for the prompt “A visual portrait of San Francisco,” compared to “A visual portrait of San Francisco. Be creative!” The latter incorporates elements from Vincent Van Gogh’s The Starry Night, one of the most recognizable paintings in Western art.

With the adoption of AI tools, we are entering what Kate Compton calls an era of “liquid art” (Compton, 2023; Karth & Compton, 2023). Just as mechanical reproduction challenged the ‘aura’ of one-of-a-kind artworks like the Mona Lisa or The Starry Night (examples of ‘solid art’) (Benjamin, 2018), algorithmic reproduction challenges the authenticity or meaning of any one realization sampled from these generators. In response to David Cope’s generators of classical music (Adams, 2010), Compton coined the term Bach Faucet as a “situation where a generative system makes an endless supply of some content at or above the quality of some culturally-valued original, but the endless supply of it makes it no longer rare, and thus less valuable.” Through the lens of liquid art, any one particular AI-generated image is but a drop in the ocean, beholden to the same underlying processes of meaning-making as any other drop in that ocean. As Levent and Shroff say: “the model is the message” (Levent & Shroff, 2023). In this view, locating the primacy of meaningful artistic intent requires creators to critically examine the reinforced patterns that already exist within these models, as prompt curation alone risks treating a liquid artifact as a solid piece of art.

Rallying the Power of Divergent Creativity

In the 1956 Dartmouth summer workshop that helped launch the study of AI, the organizers identified in their proposal “randomness and creativity” as one of seven key challenges for “the artificial intelligence problem.” Tellingly, they wrote about randomness: “A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of a [sic] some randomness” (McCarthy et al., 1955, p. 14).

In the face of models that actively perpetuate monoculture, we must take a different tack in order for these models to empower human creativity. Rather than being overwhelmed by the “slop” of these generators (Wallace-Wells, 2024), we must independently cultivate and refine our own unique and personal voice, and reimagine these models as serendipity machines that amplify this personal voice via the unexpected meanders of injected randomness.

So how do we transform AI tools from bland homogenizers into serendipity machines that can amplify creativity? This requires diligent work from both the creative users and the creators of these tools (Jones et al., 2024). For users, we must be careful when adopting AI recommendations that already exist within the possibility space of generative models. These easy shortcuts may make it harder to create something in one’s own style, which is intrinsically unique and novel. This commitment to personal style also requires understanding the underlying biases of these models (what they portray as creative, cliché, or high quality) and actively revealing and disrupting those representations. For tool builders, rather than trying to produce idealized modal responses, when designing for creative production, we must re-inject randomness, indeterminacy, and mess back into the sampling procedures of these models (Bradley et al., 2023; Ippolito et al., 2019; Zhang et al., 2024). This could also involve introducing more meaningful human control into these systems to turn them into instruments (Akten, 2021), via parameters like temperature, techniques like classifier-free guidance, and methods like ControlNet, IP-Adapter, and LoRAs. Beyond fine-grained control, it is also important to foster more “public intelligence” via more active, transparent, and participatory engagement with models and their inputs and outputs (Jones et al., 2024).
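Of these control parameters, temperature is the most direct dial on divergence: dividing a model’s logits by a temperature before the softmax flattens the output distribution as the temperature grows, and concentrates it on the single most likely output as it shrinks. A minimal, self-contained sketch, with made-up logits for four candidate outputs:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert logits to probabilities. High temperature flattens the
    distribution (more divergent samples); low temperature concentrates
    mass on the maximum-likelihood output (more convergent samples)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits (assumed for this example, not from a real model).
logits = [2.0, 1.0, 0.5, 0.1]
convergent = softmax_with_temperature(logits, temperature=0.2)
divergent = softmax_with_temperature(logits, temperature=5.0)
```

At temperature 0.2, nearly all probability mass sits on the top candidate; at temperature 5.0, the four candidates are close to equiprobable, so sampling will wander across all of them.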

With more diverse, diffuse, and ‘glitchy’ model outputs, discerning and critical users can make their own meaning from the raw material. For example, recent work has shown that AI-generated images can serve as inspiration by sparking lateral thinking via ‘happy accidents’ (Epstein et al., 2022). In that study, people came up with ideas for a future utopia without the help of AI, and were then shown what the generator created, critiquing it to refine their vision. We found that it was the visual indeterminacy and glitchy nature of an early image generator, VQGAN, that drove insights. With a more modern, high-fidelity, and convergent model, this study might not have seen creativity gains. Artists have echoed this same idea with GPT-2, arguing that its bumbling ineptitude resulted in ‘happy accidents’ more interesting than the frequently plastic platitudes of more advanced systems such as GPT-4.

The history of chance aesthetics, as well as these examples, illuminates a core tension at the center of algorithmic reproduction: the twin objectives of building convergence and divergence into generative systems. In the name of convergent values like factuality and safety, we have built systems that faithfully parrot the training data. Yet such behavior necessarily comes at the cost of the divergent serendipity and ‘happy accidents’ that lie at the heart of true creative production.


Acknowledgments

The authors thank Rebecca Scolnick and the Moonlight platform for the tarot reading which determined the author order. The authors also thank Aaron Hertzmann, Jan Overgoor, Hope Schroeder, Amy Smith, Caroline Jones, Joel Simon, Tatsu Hashimoto, Alex Calderwood, and Zhipeng Liang for helpful feedback, comments, and ideas throughout the editorial process. Thanks to Micah Epstein for graphic design consultation. Special thanks to Stochastic Labs, the Stanford Digital Economy Lab, and Project Liberty for support and for informing this work, and the Visual Artists, Technological Shock, and Generative AI working group for inspiration.

Author Contributions

Author order was determined by a tarot reading. In collaboration with Rebecca Scolnick on the Moonlight platform, JU and ZE each drew three cards, which we identified as connected to JU and ZE, respectively. Then Rebecca pulled the Eight of Pentacles, which we all identified as pointing to the cards connected to JU.

Disclosure Statement

Johan Ugander and Ziv Epstein have no financial or non-financial disclosures to share for this article.


References

Adams, T. (2010, July 10). David Cope: ‘You pushed the button and out came hundreds and thousands of sonatas.’ The Guardian. https://www.theguardian.com/technology/2010/jul/11/david-cope-computer-composer

Akten, M. (2021). Deep visual instruments: realtime continuous, meaningful human control over deep neural networks for creative expression [Doctoral dissertation]. Goldsmiths, University of London. https://research.gold.ac.uk/id/eprint/30191/

Anderson, B. R., Shah, J. H., & Kreminski, M. (2024). Homogenization effects of large language models on human creative ideation. ArXiv. https://doi.org/10.48550/arXiv.2402.01536

Arp, Jean (Hans). (1916–1917). Untitled [Collage with squares arranged according to the law of chance]. The Museum of Modern Art. https://www.moma.org/collection/works/37013

Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., DasSarma, N., Drain, D., Fort, S., Ganguli, D., Henighan, T., Joseph, N., Kadavath, S., Kernion, J., Conerly, T., El-Showk, S., Elhage, N., Hatfield-Dodds, Z., Hernandez, D., Hume, T., … Kaplan, J. (2022). Training a helpful and harmless assistant with reinforcement learning from human feedback. ArXiv. https://doi.org/10.48550/arXiv.2204.05862

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Benjamin, W. (2018). The work of art in the age of its mechanical reproduction. Routledge.

Boden, M. A., & Edmonds, E. A. (2009). What is generative art? Digital Creativity, 20(1–2), 21–46. https://doi.org/10.1080/14626260902867915

Borges, J. L. (1999). Collected fictions. Penguin.

Bottou, L., & Schölkopf, B. (2023). Borges and AI. ArXiv. https://doi.org/10.48550/arXiv.2310.01425

Bradley, H., Dai, A., Teufel, H., Zhang, J., Oostermeijer, K., Bellagente, M., Clune, J., Stanley, K., Schott, G., & Lehman, J. (2023). Quality-diversity through AI feedback. ArXiv. https://doi.org/10.48550/arXiv.2310.13032

Christiano, P. F., Leike, J., Brown, T., Martic, M., Legg, S., & Amodei, D. (2017). Deep reinforcement learning from human preferences. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems (Vol. 30, pp. 4299–4307). Curran Associates. https://papers.nips.cc/paper_files/paper/2017/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html

Compton, K. (2023, February 20). Liquid art—machine learning art for exploration and interactivity [Video]. YouTube. https://www.youtube.com/watch?v=FKNj64OgO4E

daVinci, L. (1956). A treatise on painting [codex urbinas latinus]. Princeton University Press. https://www.gutenberg.org/ebooks/46915

Doshi, A., & Hauser, O. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28), Article eadn5290. https://doi.org/10.1126/sciadv.adn5290

Eno, B., & Rubin, R. (Hosts). (2021, August 17). Extended cut: Brian Eno and Rick Rubin. [Audio podcast episode]. In Broken Record with Rick Rubin, Malcolm Gladwell, Bruce Headlam and Justin Richmond. Spotify. https://open.spotify.com/episode/2QDaIwhmtrBD8RtZoTCyoW?si=nzfPY9tHTYqiE-_7XEmYIA

Epstein, Z., Hertzmann, A., and the Investigators of Human Creativity. (2023). Art and the science of generative AI. Science, 380(6650), 1110–1111. https://doi.org/10.1126/science.adh4451

Epstein, Z., Schroeder, H., & Newman, D. (2022). When happy accidents spark creativity: Bringing collaborative speculation to life with generative AI. International Conference on Computational Creativity.

Fourcade, M., & Healy, K. (2024). The Ordinal Society. Harvard University Press.

Grynsztejn, M., & Myers, J. (2002). Ellsworth Kelly in San Francisco (1st ed.). University of California Press.

Ippolito, D., Kriz, R., Sedoc, J., Kustikova, M., & Callison-Burch, C. (2019). Comparison of diverse decoding methods from conditional language models. In A. Korhonen, D. Traum, & L. Màrquez (Eds.), Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 3752–3762). Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1365

Jones, C., Gupta, H., & Ritchie, M. (2024). Visual artists, technological shock, and generative AI. https://doi.org/10.21428/e4baedd9.b4f754fd

Kadavath, S., Conerly, T., Askell, A., Henighan, T., Drain, D., Perez, E., Schiefer, N., Hatfield-Dodds, Z., DasSarma, N., Tran-Johnson, E., Johnston, S., El-Showk, S., Jones, A., Elhage, N., Hume, T., Chen, A., Bai, Y., Bowman, S., Fort, S., … Kaplan, J. (2022). Language models (mostly) know what they know. ArXiv. https://doi.org/10.48550/arXiv.2207.05221

Karth, I., & Compton, K. (2023). Conceptual art made real: Why procedural content generation is impossible. In P. Lopes, F. Luz, A. Liapis, & H. Engström (Eds.), Proceedings of the 18th International Conference on the Foundations of Digital Games (Article 71). Association for Computing Machinery. https://doi.org/10.1145/3582437.3587212

Kelly, E. (1952). Sketchbook #17, Paris & Sanary, 1950–52 [Spiral-bound sketchbook with pencil, ink, and cut-and-pasted colored paper]. The Museum of Modern Art. https://www.moma.org/collection/works/419618

Kleinberg, J., & Raghavan, M. (2021). Algorithmic monoculture and social welfare. Proceedings of the National Academy of Sciences, 118(22), e2018340118. https://doi.org/10.1073/pnas.2018340118

Levent, I., & Shroff, L. (2023, April 23–28). The model is the message [Conference presentation]. The Second Workshop on Intelligent and Interactive Writing Assistants, Association of Computer Machinery’s CHI 2023 Conference on Human Factors in Computing Systems, Hamburg, Germany. https://programs.sigchi.org/chi/2023/program/content/96464

Lewis, G. E. (1996). Improvised music after 1950: Afrological and Eurological perspectives. Black Music Research Journal, 16(1), 91–122. https://doi.org/10.2307/779379

Malone, M. (2009). Chance aesthetics. University of Chicago Press.

Mansimov, E., Parisotto, E., Ba, J. L., & Salakhutdinov, R. (2016). Generating images from captions with attention. ArXiv. https://doi.org/10.48550/arXiv.1511.02793

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. AI Magazine, 27(4), Article 4. https://doi.org/10.1609/aimag.v27i4.1904

Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627(8002), 49–58. https://doi.org/10.1038/s41586-024-07146-0

Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback. ArXiv. https://doi.org/10.48550/arXiv.2203.02155

Pliny the Elder. (1991). Natural history: A selection. Penguin Publishing Group.

Robertson, E. (2006). Arp: Painter, Poet, Sculptor. Yale University Press.

Wallace-Wells, D. (2024, July 24). How long will A.I.’s ‘slop’ era last? The New York Times. https://www.nytimes.com/2024/07/24/opinion/ai-annoying-future.html

Zhang Xu Calligraphy. (2024). China Online Museum. http://www.chinaonlinemuseum.com/calligraphy-zhang-xu.php

Zhang, Y., Schwarzschild, A., Carlini, N., Kolter, Z., & Ippolito, D. (2024). Forcing diffuse distributions out of language models. ArXiv. https://doi.org/10.48550/arXiv.2404.10859

Zhou, E., & Lee, D. (2023). Generative AI, human creativity, and art. Social Science Research Network. https://doi.org/10.2139/ssrn.4594824


©2024 Johan Ugander and Ziv Epstein. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article. 
