
Discerning Audiences Through Like Buttons

Published on Jan 31, 2024

Column Editor’s Note: The ‘like button’ is a ubiquitous and infamous feature of social media platforms. ‘Likes’ ostensibly allow users to interact and engage with one another, but platform developers hope that the data generated by users’ likes allows them to model, predict, and even manipulate both individual and collective affective states. This Mining the Past column by communication scholar Carina Albrecht explores the history of the like button from “Little Annie,” developed at CBS in the mid-twentieth century, to the Cambridge Analytica scandal. Throughout this history, researchers and tech developers hoped to make ‘subjectivities’—emotions, preferences, personalities, political orientations—into ‘objectivities’; they sought to turn inner worlds into profitable data. Albrecht’s history reveals that the like button is best understood not as a passive recorder of preexisting affect and sentiment, but rather as a data technology that generated the emotive effects the button claimed to measure.

Keywords: like button, audiences, social media, psychology, Lazarsfeld-Stanton program analyzer


Introduction

In 2018, whistleblower Christopher Wylie revealed that Cambridge Analytica had acquired and used the data of millions of Facebook users to create psychological profiles of voters for targeting ads on social media for Donald Trump’s 2016 U.S. presidential campaign (Cadwalladr & Graham-Harrison, 2018; Confessore, 2018). It immediately became a scandal. In addition to concerns about the data breach, many questioned the ethics and efficacy of microtargeting audiences using psychological profiles created from social media data (Gibney, 2018). The like button was at the center of the debate: Wylie claimed Cambridge Analytica used data about people’s ‘likes’ in combination with personality quizzes to predict Facebook users’ personalities and political orientations, craft political ads aimed at these personality profiles, and target other Facebook users with ads based on the personalities their likes predicted for them (Hern, 2018). Whether Cambridge Analytica’s strategy actually contributed to Trump winning the presidency is still open for debate. However, the episode made clear to the broader public that the like button was not a simple engagement feature for the platform but a data generation machine—one that could be at once profitable and problematic. Other scandals and whistleblowers followed, and many people now fear that social media platforms have a distinct power to manipulate their users based on the data they collect about their preferences (Mac & Kang, 2021; Orlowski, 2020; Zuboff, 2019).

However, the idea that media content can be crafted based on data from audiences’ ‘likes’ is not new. In particular, the belief that it is possible to get audiences to indicate what they like with the click of a button emerged long before social media platforms implemented their thumbs-up and heart-shaped buttons—with the help of tools developed for experimental psychology. Psychologists were among the first to seek to transform human feelings into data, and these efforts did not go unnoticed by media researchers. Since the 1930s, as radio industries started to sell advertisers space in their programming, data about audiences’ preferences became essential to media businesses. Media administrators had to create, profile, package, and price audiences using data as evidence of their value to advertisers, which posed considerable challenges: collecting this data required dedicated research and planning, and was conventionally done through costly focus groups and experiments conducted in collaboration with psychologists (Ang, 1991; Gitlin, 2005; Napoli, 2003). For these reasons, media industries have always desired technological apparatuses that could make this work easier, so it is no coincidence that the like button became an enormous financial success for Facebook and other social media platforms (Bucher, 2021; Gerlitz & Helmond, 2013). Crucially, the trajectory of the like button, as described next, shows that the use of ‘like’ data has always been controversial and problematic.

The First Like Button

The idea of collecting data about audience preferences through a button first emerged from a partnership between Paul Lazarsfeld and Frank Stanton when they invented a machine known as the Lazarsfeld-Stanton Program Analyzer—which CBS staff later nicknamed “Little Annie,” and which others gave less affectionate names, such as the “dingus” or that “damn black box” (Levy, 1982; Scannell, 2007). Lazarsfeld was an influential scholar of sociology and mass media in the United States during the mid-20th century, whose academic background included training in psychoanalysis, sociology, physics, and mathematics, and whose research included developing mathematical and statistical models for communication and social research. Stanton was a researcher, and later an executive, at CBS between the 1940s and the 1970s, and his work included developing and applying empirical social research methods at the broadcasting network. The program analyzer emerged from their collaboration on a project at Princeton University’s Office of Radio Research (ORR)1 that aimed to discover the motives behind people’s listening habits and what made radio programs popular (Müller-Doohm, 2005). The machine was a polygraph that simultaneously recorded multiple people’s reactions to a particular program in real time by having them press a green button if they liked the program they were listening to or a red button if they disliked it—refraining from pressing either button signified indifference (Fiske & Lazarsfeld, 1945; Levy, 1982; Millard, 1992; Scannell, 2007).

Before Lazarsfeld moved to the United States and joined the ORR, he was already interested in collecting data about people’s listening preferences using a machine. In the late 1920s, when he was still living in Austria and working at the Psychological Institute of the University of Vienna, he wondered why some people liked one type of music over another and if there were universal elements and characteristics of music that made it most enjoyable (Levy, 1982; Millard, 1992). However, he believed that using interviews to ask people about what they thought was enjoyable about music would rely too much on their memory and their ability to put subjective feelings into words, so he argued for a method or machine to collect data about the listener’s reactions that bypassed their verbal input (Danziger, 1990; Fiske & Lazarsfeld, 1945; Levy, 1982).2

Lazarsfeld first attempted to create this method using a metronome-like device and a pad. In his experiments, the research participant sat in a room and listened to a music recording, and whenever the metronome ticked, they noted on the pad whether they liked or disliked the song at that moment (Levy, 1982; Millard, 1992). This method was very crude and challenging to implement with many people at the same time, so Lazarsfeld dreamed of the day when the affordances of electronic radio—an emerging technology at the time—would enable better and more precise methods for collecting this data (Lazarsfeld, 1951). A mechanical device to do this job only materialized circa 1937, when Lazarsfeld and Stanton worked together on methods to collect data on audience preferences and Lazarsfeld recalled his first attempts with the metronome (Levy, 1982). Stanton then contributed to the design of the program analyzer based on his previous experience with audience size measurement, and a lab technician known to Stanton built the prototype (Lazarsfeld, 1951; Levy, 1982).

Not surprisingly, the prototype of the program analyzer was an adaptation of a polygraph often used in experimental psychology—a device with a long history of use as a ‘lie detector’ (Fiske & Lazarsfeld, 1945; Grubin & Madsen, 2005). It was connected to 10 ‘on-off’ buttons to track the reactions of 10 people simultaneously, so initially there was only one option: to push a button when the participant ‘liked’ the radio program tested in the experiment (Levy, 1982). These buttons were connected to 10 pens that wrote on a roll of white paper whenever a button was pressed during the program—see Figure 1 (Fiske & Lazarsfeld, 1945; Levy, 1982). For the experiments, the participants sat together in a dimly lit studio—where they were sometimes offered cigarettes to ‘relax’—and were asked to press the button with their thumb as they listened to the test program (Peatman & Hallonquist, 1950). After a few sessions and participant feedback, Lazarsfeld and Stanton improved the device to include a ‘dislike’ button. The ‘like’ button became a green button held in the right hand, and a red button signifying ‘dislike’ was added to be held in the left hand (Levy, 1982; Peatman & Hallonquist, 1950). After each session, the researchers could look at the collected data (see Figure 2) and, if necessary, use it to ask the participants for more details about what motivated their reactions (Fiske & Lazarsfeld, 1945; Peatman & Hallonquist, 1950).

Figure 1. Frank Stanton, left, and Paul Lazarsfeld with the Stanton-Lazarsfeld Program Analyzer. Image dated June 18, 1942. New York, NY. Credits: CBS via Getty Images.

Figure 2. An example of a report generated with data from the program analyzer (Peterman, 1940).

Skepticism and the Need for More Data

Not everyone liked the first like button. Some radio programmers, writers, and producers were excited to incorporate the method into their creative process, while others argued it was an interference—they feared that data about the audience's likes and dislikes could dictate what they were expected to create (Ehrlich, 2008; Levy, 1982). There was also resistance inside the project. For example, Theodor Adorno—an influential philosopher and musicologist—was hired by Lazarsfeld to work on theoretical frameworks for the ORR but ended up openly opposing the fundamentals of its data collection, condemning the effort to measure music preferences as a “simplification” (Müller-Doohm, 2005, p. 247). Similarly, Robert Merton—an influential sociologist and Lazarsfeld’s colleague at Columbia University—once described the experiments that used the program analyzer as a “strange spectacle”:

I enter a radio studio for the first time, and there I see a smallish group—a dozen, or were there twenty?—seated in two or three rows (...) These people are being asked to press a red button on their chairs when anything they hear on the recorded radio program evokes a negative response—irritation, anger, disbelief, boredom—and to press a green button when they have a positive response. For the rest, no buttons at all. (Merton et al., 1990, p. xvi; emphasis added)

This skepticism may have eventually changed Lazarsfeld’s thinking about the program analyzer. In 1945, Lazarsfeld and Marjorie Fiske—a social psychologist—argued that more qualitative data was needed to better understand what audiences really like, and that three additional methods of data collection should be combined with the program analyzer: an analysis of the program's content, an analysis of the personal characteristics of the groups that are listening, and direct interviews with people to understand what the program means to them (Fiske & Lazarsfeld, 1945). They noted that discovering the cause of listener reactions was one of the major problems of radio audience studies; hence, data consisting only of counts of likes and dislikes was insufficient.

Fiske and Lazarsfeld’s (1945) conclusions and concerns are at odds with the further development of these methods in the late 20th century and the final triumph of quantitative data in audience research with the rise of social media. Eventually, the two distanced themselves from research on the program analyzer, but the device continued to be used and improved by the media industries, particularly at CBS, under the nickname Little Annie, or Big Annie for larger versions of the device—see Figure 3 (Ehrlich, 2008; Millard, 1992; Peatman & Hallonquist, 1950). Since Lazarsfeld and Stanton never patented their invention, other media companies and research groups created different versions of the analyzer and called them simply ‘program analyzers’ (Ang, 1991; Levy, 1982). In the film industry, the program analyzer could be found under names such as ‘reactograph’ or ‘audience analyzer’ (Brockhaus & Irwin, 1958; Cirlin & Peterman, 1947).

Figure 3. “Big Annie,” the CBS program analyzer, with the audience in the studio. Photo dated January 28, 1947. New York, NY. Credits: CBS via Getty Images.

Other devices collected more refined data sets by giving research participants more options for expressing their reactions to the programs they watched or listened to. For example, in the 1940s, a device called the Hopkin Televote—used by George Gallup to test radio programs—presented research participants with a dial to register a spectrum of five reactions: “like very much,” “simply like it,” “neutral,” “dull,” or “very dull” (Marx, 1955; Millard, 1992). In the early 1950s, the advertising company McCann-Erickson created a device called the Televac, which provided the viewer with a joystick handle that could be switched to signal four reactions, from mild to strong (Millard, 1992). Several more devices were used between the late 1970s and the late 1980s, particularly at TV stations. The Program Evaluation Analysis Computer (PEAC)—with sixteen reaction buttons on a keyboard for audiences to choose from—was extensively used in the advertising, political campaign, health education, and television industries (Levy, 1982; Noseworthy, 1983). Another device, the VOXBOX, was designed to be wired to the viewer's TV inside their house and offered “excellent,” “informative,” “credible,” “funny,” “boring,” “unbelievable,” “dumb,” or “zap” buttons for the audience to choose from (Keegan, 1980).

As these devices proliferated, political campaigns also started to use them—but not without pushback. Since the early 1980s, a dial device called the Perception Analyzer has been used to collect data about voters’ reactions to political candidates during televised political debates (Newman, 1994). For some political scientists, the practice of evaluating politicians on a scale from positive to negative using a dial—rather than evaluating their actual political platforms—was “the most egregious insult to democracy” and “the prime illustration of the deterioration of political discourse, the substitute of sentiment for reason” (Grove, 1987). On the other hand, campaign managers viewed the device as a valuable tool for instant feedback, enabling politicians to tailor their phrases, intonation, and body language to the audience (Grove, 1987). Interestingly, the dial device is still produced and used today, with web browser and app versions of the button available.3

Despite the earlier skepticism and persistent concerns that people's experiences with media encompass many aspects that simple interactions with a button cannot encapsulate, the search for ‘like data’ has only intensified with the rise of social media. That these buttons have left the controlled environments of research groups to become part of the media infrastructure we interact with daily on digital platforms suggests that media companies have stopped asking hard questions about their audiences’ experiences in favor of easier ways to sell those audiences to advertisers. In this sense, the like data might not be ‘wrong’; instead, it serves a narrower purpose in the digital age.

Normalization and Regret

Even though the value and validity of ‘like’ data go mostly unquestioned by those who produce content and advertise on digital platforms today, the like button has been the main target of much criticism about what is ‘wrong’ with social media. For example, Roger McNamee, a former adviser to and investor in Facebook, said the like button was the “beginning of the end” of the company’s “good old days” (Carr, 2019). Likewise, the creators of Facebook’s like button, Leah Pearlman and Justin Rosenstein, have both quit their jobs at Facebook and publicly expressed deep regret about their roles in creating it (Karppi & Nieborg, 2021). Even inside Facebook, teams investigated to what extent the like button was the culprit behind the platform’s maladies—from disinformation to declining mental health among teenagers (Isaac, 2021).

However, the belief in the validity and efficacy of like data is not only the fuel that keeps these platforms running—by offering a straightforward way to package and sell audiences to advertisers—but has also become the essence of what these platforms offer back to us. The early fears of like-button data interfering with creative processes and dictating content production have therefore materialized: on these platforms, we are trapped in a cybernetic feedback loop in which the machines offer us back more content like what we already consume, and we are trained to consume and react in predictable ways. In this human-machine assemblage, pressing or not pressing a button has become the only way audiences can express how they feel.


Acknowledgments

Carina Albrecht would like to thank Dr. Wendy Hui Kyong Chun and Dr. Sun-ha Hong, fellow PhD students at Simon Fraser University, HDSR editors, and reviewers for their invaluable feedback on earlier versions of this research and this article.

Disclosure Statement

Carina Albrecht's research is supported by generous funding from the SSHRC Canada Graduate Scholarship and the SFU-Mellon Data Fluencies fellowship. The Digital Democracies Institute has also supported this publication by paying for the copyrighted images.


References

Ang, I. (1991). Desperately seeking the audience. Routledge.

Brockhaus, H. H., & Irwin, J. V. (1958). The Wisconsin sequential sampling audience analyzer. Speech Monographs, 25(1), 1–13. https://doi.org/10.1080/03637755809375220

Bucher, T. (2021). Facebook. Polity Press.

Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Carr, D. (2019, February 6). Why a former Facebook adviser says the “like” button was “beginning of the end” of company’s good old days. CBC. https://www.cbc.ca/radio/thecurrent/the-current-for-february-6-2019-1.5007535/why-a-former-facebook-adviser-says-the-like-button-was-beginning-of-the-end-of-company-s-good-old-days-1.5007542

Cirlin, B. D., & Peterman, J. N. (1947). Pre-testing a motion picture: A case history. Journal of Social Issues, 3(3), 39–41. https://doi.org/10.1111/j.1540-4560.1947.tb02212.x

Confessore, N. (2018, April 4). Cambridge Analytica and Facebook: The scandal and the fallout so far. The New York Times. https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html

Ehrlich, M. C. (2008). Radio Utopia: Promoting public interest in a 1940s radio documentary. Journalism Studies, 9(6), 859–873. https://doi.org/10.1080/14616700802227837

Fiske, M., & Lazarsfeld, P. F. (1945). The Columbia Office of Radio Research. Hollywood Quarterly, 1(1), 51–59. https://doi.org/10.2307/1209589

Gerlitz, C., & Helmond, A. (2013). The like economy: Social buttons and the data-intensive web. New Media & Society, 15(8), 1348–1365. https://doi.org/10.1177/1461444812472322

Gibney, E. (2018, March 29). The scant science behind Cambridge Analytica’s controversial marketing techniques. Nature. https://doi.org/10.1038/d41586-018-03880-4

Gitlin, T. (2005). Inside prime time. Routledge.

Grove, L. (1987, November 13). Candidates experiment with instant feedback: Voters linked to computer watch TV debate. The Washington Post (1974-Current File), A8.

Grubin, D., & Madsen, L. (2005). Lie detection and the polygraph: A historical review. The Journal of Forensic Psychiatry & Psychology, 16(2), 357–369. https://doi.org/10.1080/14789940412331337353

Hern, A. (2018, May 6). Cambridge Analytica: How did it turn clicks into votes? The Guardian. https://www.theguardian.com/news/2018/may/06/cambridge-analytica-how-turn-clicks-into-votes-christopher-wylie

Isaac, M. (2021, October 25). Facebook wrestles with the features it used to define social networking. The New York Times. https://www.nytimes.com/2021/10/25/technology/facebook-like-share-buttons.html

Karppi, T., & Nieborg, D. B. (2021). Facebook confessions: Corporate abdication and Silicon Valley dystopianism. New Media & Society, 23(9), 2634–2649. https://doi.org/10.1177/1461444820933549

Keegan, C. A. V. (1980). Qualitative audience research in public television. Journal of Communication, 30(3), 164–172. https://doi.org/10.1111/j.1460-2466.1980.tb02003.x

Lazarsfeld, P. F. (1951). Communication research and the social psychologist. In W. Dennis (Ed.), Current trends in social psychology (pp. 218–273). University of Pittsburgh Press.

Levy, M. R. (1982). The Lazarsfeld-Stanton Program Analyzer: An historical note. Journal of Communication, 32(4), 30–38. https://doi.org/10.1111/j.1460-2466.1982.tb02516.x

Mac, R., & Kang, C. (2021, October 3). Whistle-blower says Facebook “chooses profits over safety.” The New York Times. https://www.nytimes.com/2021/10/03/technology/whistle-blower-facebook-frances-haugen.html

Marx, L. S. (1955). A study of audience reaction to the television film “What of Tomorrow” [Unpublished master’s thesis]. Kansas State University. http://hdl.handle.net/2097/19302

Merton, R. K., Fiske, M., & Kendall, P. L. (1990). The focused interview: A manual of problems and procedures (2nd ed.). The Free Press.

Millard, W. J. (1992). A history of handsets for direct measurement of audience responses. International Journal of Public Opinion Research, 4(1), 1–17. https://doi.org/10.1093/ijpor/4.1.1

Müller-Doohm, S. (2005). Adorno: A biography. Polity Press.

Napoli, P. M. (2003). Audience economics: Media institutions and the audience marketplace. Columbia University Press.

Newman, B. (1994). The marketing of the president: Political marketing as campaign strategy. SAGE Publications. https://doi.org/10.4135/9781483326702

Noseworthy, C. P. (1983). An examination of the validity of the Program Evaluation Analysis Computer as an evaluation instrument for instructional and informational programs [Unpublished master’s thesis]. Memorial University of Newfoundland. https://research.library.mun.ca/4303/

Orlowski, J. (Director). (2020). The social dilemma [Film]. Netflix.

Peatman, J. G., & Hallonquist, T. (1950). Geographical sampling in testing the appeal of radio broadcasts. Journal of Applied Psychology, 34(4), 270–279. https://doi.org/10.1037/h0057628

Peterman, J. N. (1940). The “program analyzer”: A new technique in studying liked and disliked items in radio programs. Journal of Applied Psychology, 24(6), 728–741. https://doi.org/10.1037/h0056834

Scannell, P. (2007). Media and communication. Sage.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Public Affairs.


©2024 Carina Albrecht. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
