Last week, an opinion piece in the New York Times argued that algorithmically derived human-readable content may be destroying our humanity as the lines between technology and humanity blur. A particular target of the article is “robo-journalism,” the use of algorithms to write copy for the news. 1 The author cites a study alleging that “90 percent of news could be algorithmically generated by the mid-2020s, much of it without human intervention.” The obvious rebuttal is that algorithms are written by real human beings, so there is human intervention in every piece of algorithmically derived text. But statements like these also imply an individualism that simply does not match the historical tradition of how newspapers are created. 2
In the nineteenth century, algorithms didn’t write texts, but neither did each newspaper’s staff write its own copy with personal attention to each article. Instead, newspapers borrowed texts from each other—no one would ever have expected individualized copy for news stories. 3 Newspapers were amalgams of texts from a variety of sources, cobbled together by editors who did more with scissors than with a pen (and they often described themselves this way).
Newspapers have never been about individual human effort. They’ve always been about collaboration toward a common goal: giving every newspaper in every town enough material to go to press, whether daily, semi-weekly, or weekly. Shelley Podolny states that digital outlets have caused us to “demand content with an appetite that human effort can no longer satisfy,” but news outlets have never been able to satiate that demand, as the Fremont Journal of December 29, 1854, acknowledges.
To produce enough copy, editors had to select texts rather than write them. The papers’ layout was formulaic: the same types of texts went on the same pages week after week, with only minor variations. Pieces were selected sometimes for their content and sometimes for their size; texts were even cut into pieces to fit a space. 4 Texts of all kinds were printed without any knowledge of who wrote them or where they came from. To justify printing some of these texts, editors wrote introductions that were sometimes based on what they believed to be true and sometimes pure fabrication. Editors sometimes ascribed texts to specific people, but more often than not (according to my preliminary investigations of this topic), they were wrong about the text’s origin. (Stay tuned for a future blog post about these paratexts.)
Every text that was printed in a newspaper in the nineteenth century had a human author. But the text was mediated through so many other hands by the time it was printed in most newspapers that its essential “human”-ness was lost. There were pieces in almost every issue written by the editor of the paper, but those pieces represented a very small minority. Even opinion pieces were sometimes snipped from other papers.
The advent of wire services removed authors even further from their texts. Before the wire service, you could assume at least a tenuous connection between newspapers that shared texts; they at least had to know about each other well enough to want to exchange papers. But once the Associated Press and other wire services came along, even those connections began to fray, as papers connected not to each other but to a central news agency (though exchanges remained an important part of newspapers’ lives well into the 20th century). 5
The difference between a human selecting texts to print in a newspaper in a formulaic and predictable order, and a human selecting words for a computer to print in a news outlet in a formulaic and predictable order, is a difference in scale, not in kind. Where an algorithm cannot write the type of text desired, perhaps newspaper editors will intervene in a more substantial way by writing their own pieces—or perhaps they will continue to reprint texts from other sources.
Shelley Podolny uses the term “author” in quotation marks to imply that algorithmically derived texts are not truly authored by humans, but the same quotation marks could easily be applied to many “authors” in nineteenth-century newspapers. Authorship was an illusion for almost all texts in those papers. Podolny asks, “What does ‘human’ even mean?” I would answer: It means using whatever is at hand in order to create the information we need or want. For the nineteenth century, it was scissors; for the twenty-first century, maybe it’s a computer program.
- The article also decries other types of algorithmically derived texts, but the case for computer-generated creative fiction or poetry is fairly well argued by people such as Mark Sample, and is not an argument that I have anything new to add to. ↩
- This post is based on my research for the Viral Texts project at Northeastern University. ↩
- In 1844, the New York Daily Tribune published a humorous story illustrating exactly the opposite: some readers, in fact, preferred a less human touch. ↩
- We can see this happening by comparing different versions of reprinted texts in different newspapers. ↩
- In the 21st century, the AP and other news agencies provide much more copy than algorithms do—19th-century text-sharing practices are still alive and well. ↩