Speaking to algorithms? Rhetorical political analysis as technological analysis

In the last few years, research studies and opinion pieces have tried to account for the new polarisation and dealignment of US politics after Trump and of post-Brexit UK politics. It is now well established, both by academic research and by Facebook's own research, that Facebook polarises its users' political views, but rhetorical analysis has not yet accounted for the role played by algorithms in political communication and persuasion. What does social media do to rhetoric? The situation of speech in social media is often treated as if it took place in a public sphere when it should not be. This misconception prevents rhetorical studies from taking the question of technology into consideration. Drawing on the recent literature in critical algorithm studies, I develop a new approach in rhetorical criticism. I argue here that the increasing agency that algorithms have acquired in delivering and mediating rhetoric means that we must consider the role played by intermediaries when examining rhetorical situations. This paper sheds light on what I call the four conditionalities of algorithms on rhetoric: (1) programmed speech content, (2) the verticalisation of political communication, (3) the new biases produced by digital media, and (4) rhetorical machine learning.

In the last 10 years, rhetorical analysis has been flourishing in political studies. Leading studies have helped us pay attention to the evolution of speech writing and performance. Rhetoric is a deeply creative practice with an expansive repertoire of devices (Atkins, 2015; Finlayson, 2012; Hatzisavvidou, 2017). By using anecdotes, transforming the context, or adhering to the ritualistic character of 'speech moments' in the British political calendar (Finlayson and Martin, 2008), orators constantly find new ways to express themselves. Rhetors work as close as possible to common sense. They endorse 'established ideas while simultaneously advancing new ones' and 'what was once rhetoric later comes to be "common-sense" premises to routine decisions' (Martin, 2015: 28, 33). In short, rhetorical analysts insist that it is through speeches that ideas are externalised and made public. What is often not accounted for in the debate, however, is the situatedness of speeches in other media: the same speech will be packaged in multiple ways (truncated, edited, subtitled, remixed, or hijacked) depending on whether it is shared on a newspaper website, on the official party Twitter page, or on an amateur's social media page. In parallel to this expanding scholarship on rhetoric, we have witnessed an increasing polarisation of political life in the United Kingdom and the United States, with the upheavals of 2016: the Brexit referendum and the election of Donald Trump. Analysts and journalists alike have accounted for this transformation of the common sense and the dealignment of the political landscape by focusing on the use of language, questioning the limits of acceptability and the effectiveness of the rhetoric used by 'populists'.
In this context, the return of rhetoric to the nucleus of political debate leads to a more contentious question: are we witnessing, together with processes of dealignment and polarisation, a complete transformation of rhetorical culture as a consequence of the recent upheavals in UK and US politics? While this is a question that can only be answered in the long term, a clearer understanding of the unfolding scenario must take into account the media by which Trump managed to capture the political imaginary. Recent academic research as well as Facebook's own internal research have shown that Facebook polarises users (Horwitz and Seetharaman, 2020; Settle, 2018; boyd, 2018), and yet very little research has examined the role of algorithms in rhetoric and, in particular, in the distribution of speeches. By turning to rhetoric, I want to update the frame of rhetorical analysis and show that 'rhetorical situations' (Bitzer, 1968; Martin, 2013b; Turnbull, 2017; Vatz, 1973) are not only constructed between orators and situations but are essentially mediated and governed by software and algorithms. Algorithms are one more component in the assemblage of rhetorical situations (composed of speakers, arguments, context, and effects). What interests me is to examine how algorithms can be held partly responsible for making speeches highly visible or for completely burying them.
Some important studies have analysed the transition from print culture and radio to the television age and its impact on rhetoric. The space and time devoted to speeches were drastically reduced, forcing politicians to adapt and shorten their speeches. 'Hour-long radio speeches gave way to thirty- and sixty-second ads' (Jamieson, 1988: 7). This time-compression was coupled with a new economy of televised images. Kathleen Hall Jamieson noted already in the 1980s that visual moments were becoming more expressive than memorable words from great orators. When moving to the digital age and social media, we will need to assess whether the form and content of speeches have been transformed by the new media ecology.
In this article, I use more specifically the recent literature in critical algorithm studies (Bucher, 2018; Gillespie, 2018a; Panagia, 2019) to develop a new approach in rhetorical criticism. A growing debate has taken place about the impact of digital media on politics and democracy (Chadwick, 2019; Crawford, 2016; Eubanks, 2017; Marwick and Lewis, 2017; Moore, 2018; Noble, 2018; Pasquale, 2016; Tufekci, 2017). Other approaches in political science are emerging to study the impact of social media on political knowledge or whether algorithms cause radicalisation (Ledwich and Zaitsev, 2019; Munger and Phillipps, 2019; Settle, 2018). Contrary to this last strand of political science, I am not concerned with online radicalisation and democracy; instead, I aim to update the frame of rhetorical analysis by focusing on algorithms and social media. Very few scholars in rhetoric have examined the role of algorithms. Aaron Hess (2014) presents one of the few studies on this question, using Kenneth Burke's notion of identification, while others have looked at the rhetoric of algorithms (Brock, 2019). This paper instead turns to the increasing agency that algorithms have acquired in delivering and mediating rhetoric. It sheds light on what I call the four conditionalities of algorithms on rhetoric: (1) programmed speech content, (2) the verticalisation of political communication, (3) the new biases produced by digital media on rhetoric, and (4) rhetorical machine learning. It is my contention that rhetorical analysis should integrate these four conditionalities to overcome its own technological blindspot. Before discussing these at more length, I show how to consider social media in rhetorical analyses and explain why social media platforms are not a public sphere.

What does social media do to rhetoric?
Jamieson notes that with the advent of television, political speeches were no longer printed in full in newspapers, as they had been in the 'golden ages'; only edited extracts became available to the public. With the transition from television to social media, we are witnessing an intensification of this phenomenon. A new media culture is emerging with social media, but it is not entirely replacing the old media culture, since the old media have found new uses and ways to adapt to new demands (Parikka, 2012). Hence, it is correct to think of a new rhetorical culture emerging with social media, and yet we would be too quick to discard old media and their continuing roles in rhetoric.
Since the 2016 Brexit referendum and Trump's election, there has been a proliferation of articles about the use of social media in political campaigns. This manifests a growing awareness of the impact of new technologies on, for instance, the rise of the far right or foreign influence in elections. This diagnosis by investigative journalists is presented as a shifting political culture that feeds off the use of rhetoric and the contagion of reactions. For example, there has been widespread coverage of how pro-Brexit and Trump campaign groups used Facebook to microtarget swing voters in key regions (Pramuk, 2017). More recently, Facebook executive Andrew Bosworth published an internal memo in which he defended Facebook's policy of exempting politicians' pages and their political advertisement campaigns from many of Facebook's rules, particularly in terms of fact-checking and hate speech. Bosworth claimed that Trump 'got elected because he ran the single best digital ad campaign I've ever seen from any advertiser. Period' (Bosworth in Roose et al., 2020). Similarly, the director of the UK's official Vote Leave campaign for the Brexit referendum boasted about its digital savviness by asserting that it was 'the first campaign in the UK to put almost all [of its] money into digital communication' (Moore, 2018: xii-xiii). Microtargeting has also been associated with growing concern regarding online campaigning. This practice allows campaigns to use social media platforms to tailor their advertisements to small segments of the population, down to specific neighbourhoods. Facebook has recently taken the decision to increase the smallest possible targetable segment from 100 people to a 'few thousand'; this will make sure that small sections of the population, like specific streets or small towns, are not targeted with different messages (Hern, 2019).
Twitter and Google went much further than Facebook by banning all political advertisers from targeting voters based on political affiliation (though they can continue to use age, gender, and location), a move that Facebook continues to resist despite growing pressure.
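To make concrete what a minimum-audience rule amounts to, the change described above can be sketched in a few lines of code. This is a toy illustration only; the function name and the exact threshold are my assumptions, not Facebook's actual implementation.

```python
# Toy sketch of a minimum-audience rule for ad targeting.
# The threshold value and data structures are illustrative assumptions,
# not Facebook's actual implementation.

MIN_AUDIENCE = 2000  # 'a few thousand', raised from the earlier floor of 100

def can_target(audience_size: int, minimum: int = MIN_AUDIENCE) -> bool:
    """An ad set may only be served if its audience clears the floor."""
    return audience_size >= minimum

# A street-level segment of 150 people would now be rejected,
# while a town-sized segment still passes.
print(can_target(150))    # prints False
print(can_target(25000))  # prints True
```

The rule does nothing to the content of the advertisement; it only constrains how finely the audience can be sliced, which is precisely why it limits street-level microtargeting without banning targeted advertising as such.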
Even before the restrictions on microtargeting by both governments and social media platforms, it is important to note that its magnitude was largely exaggerated (Bump, 2018). Given the relative novelty of these technologies (10-15 years), many academics and journalists alike are wary of the unknowns, particularly the long-term impact of social media on public discourse, as well as possible changes in the balance of power: journalists and intellectuals having less power while private individuals, engineers, and special advisers gain much greater prominence. This is partly due to the lack of transparency about data collection and the inner workings of algorithms (Pasquale, 2016), but also to the increasing scepticism and cynicism expressed by news consumers. For instance, in a 2018 survey on news sharing in the United Kingdom, almost half of the respondents admitted to sharing inaccurate or false news (Chadwick and Vaccari, 2018).
It is tempting to think of social media (mainly Facebook, YouTube, and Twitter) as replacing other media such as newspapers, radio, and television, but this is not how the ecology of media functions. New media do not entirely replace older media; they function cumulatively, and speaking of 'hybrid media' helps to account for the interactions between older and newer media logics (Chadwick, 2017). Newspapers and radio have transformed the ways they publish their content, adopting digital publishing and creating new ways of building audience loyalty (through subscriptions, automatic downloads of podcasts, newsletters, etc.). Tensions exist in this hybrid media system: social media platforms need legacy media to produce content to be shared, and legacy media develop strategies to adapt to the new media ecology. Finally, algorithms operate both in older and newer media, since legacy media have adapted to and integrated the logics of newer media (Chadwick, 2017).
In its long history, rhetoric too has adapted to new modes of communication. It is an ancient discipline with a long tradition: it can be a communicative technique or skill learned in order to persuade, but it can also mean, more generally, the persuasive elements of discourse. In short, it denotes both a 'mode of enquiry and an object of that enquiry' (Martin, 2013: 2). By unpacking this dual nature, rhetorical studies have shown that rhetoric is both the particular and the general, both an intrinsically specialised practice and something utterly banal and widely used in everyday speech. With the arrival of platforms such as Facebook or Twitter, citizens have changed the way they communicate with each other about politics (Settle, 2018: 14-16), making it particularly interesting for rhetorical analysis to examine the transformation of speech on social media. Posting non-political content on social media (about food or other lifestyle choices, for instance) can signal certain political allegiances (veganism or anti-LGBT conservatism, for instance) and seek social feedback or commentary (Settle, 2018: 120-122).
For rhetoric, speech is a vantage point on political ideology and doctrine. The language used records the strategies and motives of political actors: 'Through a speech, we gain access not merely to the thoughts of an individual but to the more general ideological assemblages at work across a party or governmental organisation' (Finlayson and Martin, 2008: 449). These speeches are produced by institutions and made in the matrix of relations between a multitude of actors. This is where the materialist aspect of rhetorical analysis resides: it provides a critique of the ideological apparatuses that make up party politics' flows of communication. Rhetorical analysis examines how propositions are packaged and ultimately presented to an audience. Between politicians and their audience exists a complex social and emotional fabric, with the publicity of the event, the individual inclinations of the speechwriters, and the codification of the rhetorical language itself being some of the elements that shape it. In order to clarify the elements at play in a speech intervention, Martin (2015: 34-35) has provided a useful outline of its three distinct moments that combine structural and agential elements: (1) the rhetorical context, (2) the rhetorical argument, and (3) the rhetorical effects. By focusing on the distribution of speeches and the four conditionalities (programmed speech, verticalisation, digital bias, rhetorical machine learning), I want to revise this theoretical framework of rhetorical analysis used to identify a speech intervention. In addition to this linear (and Aristotelian) three-stage segmentation of the speech moment, I suggest adding a fourth element that I will examine at some length in the next two sections: (4) the distribution of the speeches. The core of the article will turn to the four conditionalities that organise the distribution of speeches.
First, it is important to recognise the role that intermediaries play in this framework: algorithms are the main intermediaries considered here. Intermediaries attempt to link media content (which includes political speeches) to personalised groups and orators, but they are not entirely neutral agents operating freely outside the networks of communication. Our encounters with short clips of political speeches on social media are recommended by social media platforms not according to editorial or ideological decisions but in order to keep users on the platform for longer (Helberger, 2018: 161). I will show how the role of intermediaries in delivering speeches has some profound methodological consequences for rhetorical criticism, political communication, and persuasion. I argue that while there is a danger of overstating the role of algorithms, it would equally be a mistake to overlook their agency in the fragmentation of the existing rhetorical culture.

Platforms are not a public sphere
As I noted earlier, the rhetorical strategies of politicians are embedded in particular institutions. This becomes problematic when the methodology of rhetorical analysis is used to look at rhetoric on the Web, since blogs and social media cannot be straightforwardly conceived as a public sphere, as Jodi Dean rightly noted in 2003. This is my first methodological point in this article: when analysts examine the situation of speech in social media, it should not be treated as a public sphere, an open marketplace of ideas, entirely free-flowing and unregulated, or a levelled playing field. Political analysts are too quick to assume that social media are public spheres or to argue for the reform of social media into rational public spheres (Bessant, 2014; Finlayson, 2019). Dean had already argued this in her influential essay, 'Why the Net is not a Public Sphere', written well before the invention of Web 2.0 and the so-called 'Twitter/Facebook revolutions', as well as before Brexit, Trump, and the Cambridge Analytica scandal. She attempted to theorise the Web for political action, considering how it could be used for revolutionary as well as for authoritarian, reactionary, and neoliberal purposes. She writes, 'the Web is a particularly powerful form of zero institution insofar as its basic elements seem a paradoxical combination of singularity and collectivity, collision and convergence' (Dean, 2003: 106).
Since the Web is not designed as a political or even social space, Dean can call it a 'zero institution' whose representations differ as widely as the participants defining it. In a sense, the platform is where all institutions operate, but that would be to conceive of it as a metalanguage rather than as a collection of networks and relations that work as processes (amassing more and more data, making new connections, etc.). The question remains: what are platforms if they are not a public sphere? They are a technological space that presents certain affordances: classifying information and selecting what to show and what to hide from users. But they are not non-human institutions acting outside of any human leverage; on the contrary, they only work in relation to human action. Algorithms perform very precise actions (mostly association or correlation, classification, ordering, and recommending) that alter the sayable and the unsayable, the visible and the invisible. These affordances do not follow a deliberative mode of government but derive both from the design and the usage of the platforms; they both enable and constrain digital engagement. For instance, computer scientist Valentin Kassarnig created an artificial intelligence (AI) system called 'Political Speech Generation' in 2016 to produce political speech for either the Republican or the Democratic parties (based on a large transcript dataset of 4000 speeches from 53 US congressional debates) (Kassarnig, 2016). While it is far from certain that politicians will start using Kassarnig's rather limited tool for speech generation, it helps to illustrate how data scientists increasingly work with politicians and their teams to examine large datasets of previous and contemporary rhetoric.
This practice raises questions about politics and ethics: not simply that politicians are no longer the authors of their speeches or that the jobs of speechwriters are being automated and made redundant, but questions about responsibility for political decisions and rhetorical effects.
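As a rough illustration of how data-driven speech generation works in principle, consider a toy bigram generator. This is a deliberately minimal sketch with an invented corpus; Kassarnig's actual system is considerably more sophisticated, but the underlying move is the same: learn which words follow which from past speeches, then recombine them.

```python
import random
from collections import defaultdict

# Toy bigram text generator in the spirit of data-driven speech generation.
# The corpus is invented; a real system would train on thousands of
# transcribed congressional speeches.
corpus = ("we will fight for the people and we will deliver for the people "
          "because the people deserve better").split()

# Build a table mapping each word to the words observed directly after it.
successors = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word].append(next_word)

def generate(seed: str, length: int = 8, rng: random.Random = None) -> str:
    """Walk the successor table, picking a random observed continuation."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    words = [seed]
    for _ in range(length - 1):
        options = successors.get(words[-1])
        if not options:  # dead end: the word was never followed by anything
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("we"))
```

Every generated sentence is stitched entirely from transitions already present in the training data, which is why the output sounds plausible yet authorless: no one wrote it, and no single person can be said to stand behind it.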

Programmed speech
It is by turning to the burgeoning debate of critical algorithm studies that rhetoric as political enquiry can start accounting for the entanglement of political life and media, and consequently the relation between rhetoric and algorithms. In our daily life, platforms recommend and filter information, products, places, experiences, and so forth; they do so using algorithms. While in the previous section I discussed platforms, I now turn more specifically to algorithms. In an important study, Taina Bucher (2018) examines what she calls 'programmed sociality'. Bucher (2018: 4) takes the example of friendship and the Facebook 'news feed', and explains how social relations are increasingly induced, augmented, supported, and produced by software. Through a detailed analysis of each version of the news feed, she finds that different groups of friends show up on the news feed depending on the version of the algorithm and the priorities inscribed within it at any given moment. Most people know that their Facebook news feed does not show every single update from all their friends or pages they follow, and yet the vast influence of these parameters is largely left uncontested: 'Of the 1,500+ stories a person might see whenever they log onto Facebook, News Feed displays approximately 300' (Boland, 2014; also in Bucher, 2018: 86). The Facebook algorithm (as it has come to be known, even though it is made of a multiplicity of algorithms that are constantly changing) 'need[s] to be understood as [a] powerful gatekeeper, playing an important role in deciding who gets to be seen and heard and whose voices are considered less important' (Bucher, 2018: 8). I want to extend this idea of 'programmed sociality' to political rhetoric and call the technological situatedness of rhetoric 'programmed speech'. Programmed speeches are speeches that are brought forth by algorithms; they are mediated, organised, filtered, and ranked.
The role of editors is slowly being eroded, and speeches are increasingly distributed on social media platforms using automated criteria: programma derives from the Greek for what is written (-gramma) before (pro-). Speeches are therefore shaped and organised before they are written or made public.
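The filtering described above (roughly 300 stories displayed out of 1,500 candidates) can be sketched as a toy ranking function. The scoring weights below are invented purely for illustration; real feed-ranking models are vastly more complex, personalised, and constantly revised.

```python
# Toy sketch of 'programmed speech': of all candidate posts, only the
# highest-scoring fraction ever reaches the feed. The weights are
# illustrative assumptions, not Facebook's actual formula.

def score(post: dict) -> float:
    """Invented engagement-weighted score for a single post."""
    return 2.0 * post["likes"] + 3.0 * post["comments"] + 1.5 * post["shares"]

def build_feed(candidates: list, slots: int = 300) -> list:
    """From ~1,500 candidate stories, display only the top-scoring ~300."""
    return sorted(candidates, key=score, reverse=True)[:slots]

candidates = [
    {"id": "speech_clip", "likes": 40,  "comments": 5,   "shares": 2},
    {"id": "meme",        "likes": 900, "comments": 120, "shares": 300},
    {"id": "news_story",  "likes": 15,  "comments": 2,   "shares": 1},
]

# The political speech clip surfaces only if it out-scores enough rivals.
feed = build_feed(candidates, slots=2)
print([post["id"] for post in feed])  # prints ['meme', 'speech_clip']
```

The point of the sketch is that no editor decides which speech is seen: the cut-off falls out of the weights, and whoever sets the weights programs the speech.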
To fully develop this notion, we need to say a little more about what an algorithm is, as well as the potential agency an algorithm might have in rhetorical situations. Simply put, an algorithm is a set of procedures for calculating an output from a given input. These procedures 'name both a problem and the steps by which it should be solved' (Gillespie, 2014: 167). Algorithms are often compared to recipes to highlight this step-by-step approach. Yet such definitions miss the relational dimension of algorithms and their social impact on human behaviour. As the history of technology teaches us, every technical innovation brings with it its own consequences for human culture and knowledge, to which rhetoric of course belongs. Philosophers such as Martin Heidegger, Gilbert Simondon, and Bernard Stiegler have demonstrated that to conceive technology (and, in our case, social media) in a utilitarian way, as a mere tool or as a means to an end, is a pre-industrial conception of technics and technology (Simondon, 2014: 320). Social media and algorithms are not simply new tools for orators to get their ideas across; they compose a new symbolic, political, and living milieu. We saw this earlier when discussing the role played by intermediaries. Central to critical algorithm studies is the claim that algorithms are neither good nor bad in themselves (including for rhetoric); we need to ask instead how these algorithms should be used to build a fairer, more inclusive, and more environmentally friendly society. Algorithms are composed of values and assumptions, found for instance in the training data and in the assignment of probability weightings, but these constantly evolve with human-human relations as well as human-algorithm relations (Amoore, 2020: 6). Rhetoric too is being arranged by algorithms, and orators are increasingly aware that the common sense they aim to contribute to is partly programmed by algorithms.
To be clear, the algorithms operating on social media platforms are not mere means; they are not neutral or simple intermediaries, but should be considered acts, operations, or events (Bucher, 2018: 49-59). They are contributors to speech events.
Simondon (2014) regretted that for too long the opposition between culture and technics has prevailed in the social sciences and in the study of politics, and rhetorical analysis needs to bridge this gap. A political speech should no longer be examined independently of the platform that publishes, shares, and ranks it according to ever-evolving criteria determined by algorithms. Algorithms are processual; they are made of adjustments of weights, parameters, and thresholds to control and tolerate errors (Amoore, 2020: 48). In fact, as Bucher (2018: 55) insists, we should not be asking 'what is an algorithm?' but 'when is an algorithm?', since 'algorithms only matter sometimes'. To fine-tune my initial question, then, what we should be asking is 'when is the algorithm at work in rhetoric?'. This is one of the main methodological lessons to learn when studying the relation between rhetoric and algorithms.

Speaking to algorithms
When studying the relation between algorithms and rhetoric, we need to distinguish between the different conditionalities at play: (1) speech content is increasingly shaped by the digital forms of platforms; (2) while platforms promise to break down the hierarchies of speeches, I will examine how algorithms 'verticalise' political communication; (3) rhetorical analysis needs to account for digital biases; and (4) finally, I will turn to the future of algorithms and rhetorical machine learning. Alan Finlayson (2019) has examined the practice of very successful YouTubers who use the video platform to publish pro-far-right speeches and build huge communities (hundreds of thousands, sometimes even millions of followers). These online celebrities are using politics and the so-called 'culture wars' to make a living, becoming 'rhetorical entrepreneurs' (through donations on websites like Patreon). The YouTube videos studied by Finlayson's rhetorical criticism are made to be consumed individually in solitude, usually on mobile phones, and the producers have access to the analytics and the users' data to adapt their content. Examining the elements used by rhetorical entrepreneurs, Finlayson (2020) notes certain commonalities in the speech content: the speakers use a domestic space (the bedroom), they talk about their emotions and opinions, they build fan communities, they confess their 'conversion narratives', and their creations respond to users' comments left under the videos.

Programmed speech content
In the last 2 years, the conversation in the media has turned to whether social media promotes the far right (Roache, 2019) and whether far-right ideologues should be banned from social media. Some platforms have started to move in this direction (Broderick, 2019). In the same vein, the problem highlighted by Finlayson is the following: what new rhetorical styles do far-right digital ideologues produce to dominate YouTube? And its corollary: what rhetorical lessons can be learned by left-wing ideologues? Part of his analysis found that social media is more conducive to far-right rhetorical entrepreneurs than to left activists, due to the level of engagement and the type of content that is produced. In short, Finlayson introduces a first perspective on rhetoric and social media, but he misses the role played by algorithms. To support his argument, he refers to Régis Debray's idea that socialism was linked to print media, and in particular the pamphlet-form, and that the far right have adapted much better to the digital media form. Perhaps New Labour and Clinton's Democratic Party occupied the space in between: the television form.
Finlayson's narrative reads more like a retrospective justification of the current state of political ideological debate than an evaluation of how algorithms amplify a certain kind of politics. By adhering to this analysis, we risk following the same path as the Facebook executive Bosworth: deeming any change to the rules, or to the ownership of the platform, 'anti-democratic'. The economic model of such platforms is monopoly capitalism; as PayPal founder and Trump ally Peter Thiel notoriously and unashamedly declared, 'competition is for losers'. However, demanding regulation and new antitrust rules does not indicate an anti-democratic or nostalgic politics (Finlayson, 2019: 78) but is a call for a more democratic one. Furthermore, we cannot accuse critics of algorithmic governance of being 'guided by an ideal version of the public sphere' either (Finlayson, 2019: 80). As noted earlier, the Web cannot be deemed a public sphere at all, since it is organised by private actors who designed their platforms according to the imperatives of capital. After all, Facebook's main innovation was the 'Like' button, which allowed the recording of users' attention and merged it into an ever-larger advertising market space. Facebook became one of the biggest advertising companies in the world; it has trained its 2 billion users to integrate the logic of marketing and advertising.
In a recent study, ethnographer Jen Schradie (2019) shows different online behaviours and successes depending on the political leaning of users, noting that left-leaning online activists are more likely to share images of demonstrations and collectives while right-leaning ones opt for memes and well-commented articles on national issues (Schradie, 2019). This programmed speech content only works with the current algorithms, which are far from static; social media platforms are always in charge of their algorithms and their constant modification. For instance, the recent loss of jobs (more than 2000 people) at BuzzFeed, HuffPost, and Vice was partly due to a change in the algorithms of Facebook and Twitter, which demoted news stories from these outlets in their users' timelines. While digital journalism was hailed as the answer to the decline of newspaper sales, these outlets made the mistake of relying too heavily on the platforms as 'middlemen' to sustain their businesses (Bell, 2019). Ironically, Facebook and Google are now funding some local newsrooms to help rebuild journalism.

Verticalisation of speech
In this section, I want to show that by using algorithms, social media platforms amplify speech according to their own discretionary criteria. In this sense, they 'verticalise' speech or, in other words, they rank the importance of speeches based on their own criteria, mainly (but not only) the number of likes and comments that certain posts get. Platforms verticalise political speeches by giving them different values, sorting them and ranking them.
Considering platforms as intermediaries, as I argued in the first part (against considering them as a public sphere), can however be misleading (Gillespie, 2018b). Social media are not simple intermediaries, especially when considering the role of moderators; they publish and prioritise content based on their own criteria. Clicks, likes, and shares are central to the procedure of ordering and prioritising the content presented to users. These procedures are inscribed mathematically in the algorithms themselves (although these mathematical computations are constantly revised). Like other media content, political speeches are turned into customised products 'that can be carefully targeted and adjusted to individual recipients' (Helberger, 2018: 154). The curated list of trending topics and the use of push notifications give a huge boost to featured stories, which can either challenge or reinforce the mainstream consensus (Schlosberg, 2018: 208). After all, platforms are in full control of the algorithms that categorise and filter the content presented to users. They favour advertisement campaigns, celebrities with huge followings, and other initiatives that keep users connected.
To understand how social media platforms influence the language of politics (how it flows, how it becomes viral, and how it gets entirely ignored), it is important to remember that these are large private corporations that do not promote social interactions for their own sake. They follow a strict business model: advertisement campaigns and other initiatives that allow users to interact and stay on the platform. For instance, in a document first leaked by The Guardian in 2016 and then acknowledged by Facebook, it was revealed that the algorithms used by Facebook in August 2014 were designed to prioritise posts related to the 'ice-bucket challenge' rather than the Black Lives Matter protest in Ferguson, Missouri against racial inequalities and police brutality against black people (Tufekci, 2017: 154-162). The protest lasted a few days until the police appeared with armoured vehicles and other military equipment. While people involved in the Black Lives Matter movement were posting updates, pictures, and live videos of the event in real time, Zeynep Tufekci was struck that, even though she was actively trying to find out more about the protest and the police repression from her Facebook news feed, what kept appearing were the more successful 'ice-bucket challenge' posts. This challenge became viral on social media when celebrities joined in by pouring buckets of ice-cold water over their heads to promote awareness of the rare disease amyotrophic lateral sclerosis (ALS). The Facebook posts about the Ferguson protest were not hidden due to a conspiracy orchestrated by Facebook moderators but due to the design of the algorithm and the priorities that were set. People were less likely to 'like' a video showing police brutality than one of celebrities pouring buckets of ice-cold water over themselves (Tufekci, 2017: 159). This illustrates the dynamics at work in the verticalisation of speech.
This verticalisation is not the natural order of rhetoric but a contingent order programmed by the current YouTube, Facebook, and Twitter algorithms. There are many other ways to visualise data than to present it as ranked lists, a form first devised by Google in the late 1990s (Hogan, 2015). The presentation and visualisation of speech as data in turn influences rhetoric and its capacity for persuasion. This is an issue, according to Bernie Hogan, because relational data (speech, for instance) resists such an abstract ordering. These ranked lists are contingent because alternatives are slowly being designed: for instance, 'graph databases' are increasingly envisioned as a coherent future for the Web (Hogan, 2015). A graph is simply a representation of objects (specific data) and their relationships. Data graphs can thus map multiple types of complex data or speeches. While it remains to be seen how these can be implemented and how they would transform rhetorical experiences, knowing that the current verticalisation of speech could be changed through other data visualisation techniques is an important point to consider. Platforms have time and again made the promise that they are 'neutral' and 'free from subjectivity, error, or attempted influence' (Gillespie, 2014: 179), but they profoundly shape the user's experience and participation in social and political relations.

Digital bias: Why is rhetoric so white?
Another conditionality when examining the relations between rhetoric and algorithms is digital bias. One of the main reasons why algorithms produce deeply biased results when filtering information is that they reproduce the biases found in wider society. In the space of social media platforms, however, these biases manifest themselves in an even more explicit way. Designers and engineers working on algorithms are 'overwhelmingly white, overwhelmingly male, overwhelmingly educated, overwhelmingly liberal or libertarian' and therefore 'overlook minority perspectives' (Gillespie, 2018a: 12), and this is especially significant in the moderation policies and practices that Gillespie studies in detail. But if we want to account for the full extent of digital bias, we need to 'move beyond the idea of biased bots' (Benjamin, 2019: 47) and reactionary engineers to understand how these biases are coded and how race itself functions as a technology. By considering race as a technology, Ruha Benjamin (2019: 40) attempts to go beyond racism as an output of technology, the glitches and errors that social media algorithms have produced and continue to produce (wrongly labelling black people in facial recognition software, for instance), by turning to racism as an input of technology, the social context and design. In fact, we need to hold both sides of the problem together. Many rules of machine learning algorithms are not designed by human engineers but are generated by the machine from training data. In this case, no one can be held responsible for designing a 'racist bot', and yet all algorithms contain ethico-political arrangements; they are far from being value-free machines. Amoore (2020: 71) locates the ethico-politics contained within the algorithm in the 'spatial arrangement of probabilistic propositions' (the setting of edges, weightings, and threshold values, for instance). These result from multiple human-machine interactions rather than from the blueprint of engineers.
The intrinsic bias of digital technology has now been demonstrated by numerous studies (Benjamin, 2019; Noble, 2018). Critically scrutinising algorithms and how they shape political speeches can help answer the question of why rhetoric is so white. In a sense, algorithms and rhetoric are deeply conservative: they function using the past, and predictions are based on trying to reproduce what has already happened. The Netflix algorithm is ingenious in recommending certain films or TV series to users, for instance, but deeply stupid when it assumes that users will want to watch more of exactly the same. There is a distinction to be made here between giving people what they want and what they need (Bucher, 2018: 141-142). This is the difference between a technology designed as a celebrity contest and one designed according to responsible editorial decisions. Algorithms work at 'automating social reproduction' (Benjamin, 2019: 73), making sure that the future is not open but simply a continuation of the present state of things. In this sense, algorithms are the mirror of our society; this argument, however, goes against the individual and social progress that algorithms are said to bring. Benjamin provides a multitude of examples in which algorithms have simply reproduced existing biases. An example that particularly stands out is Amazon's decision in 2018 to scrap an AI recruitment tool because it systematically gave lower scores to female applicants, even though the tool had been designed to reduce unconscious bias in the first place (Benjamin, 2019: 140-143). Race 'gets inside' technology because algorithms absorb the structural racism of social and institutional relations.
'Algorithms can compensate [and should be compensating endlessly] for the bias in datasets, and companies do make choices about when to intervene in correcting certain algorithmic outcomes' (Bucher, 2018: 154). Designers and developers decide what characteristics to optimise in algorithms, though of course they are not the only ones who determine this, and they are therefore 'responsible' for the agency and the eventfulness of algorithms in producing a certain kind of rhetoric. Algorithms are always 'made, maintained, and sustained by humans' (Bucher, 2018: 52), and it is by scrutinising this process that technological analyses can hold algorithms, and the humans behind them, accountable for algorithmic rhetorical events. Thus, studying rhetoric as a technology fundamentally involves an attempt to account for the biases and the racism that it reproduces.

Predictive analytics: Rhetoric you may like
Finally, rhetorical political analysis can study how 'predictive analytics' shapes rhetoric. As explained earlier, rhetoric is a powerful tool for characterising and framing the present conjuncture while adding original creative elements as possible ways out of crises, deadlocks, or the weakening of power. In a nutshell, rhetorical analysts often note that present-day rhetoric is tomorrow's common sense. While conventional algorithms use a rule-based computing structure ('if . . . then'), machine learning (often simply called AI) follows paths that are not explicitly programmed. While such algorithms remain tied to top-down programming, machine learning follows, on the contrary, a bottom-up logic. Can we then imagine rhetoric produced by AI neural networks? With the intrusion of predictive analytics, algorithms can work at predicting whose rhetoric, and what forms of rhetoric, will be more salient and more effective in the future. As with other machine learning algorithms, predicted future rhetoric would be worked out from past and present training data (a multiplicity of rhetorical contexts, speeches, and rhetorical effects) to break new limits in the creativity and efficiency of speeches.
In a notorious early article, Wired editor Chris Anderson (2008) discarded the work of criticism and hermeneutics as increasingly redundant: with the deluge of data we now have access to, we need only let the algorithms do the work and find the correlations. The 'raw data' fed into the large databases of these machines is nowhere near a complete picture of reality (Rouvroy, 2018), but predictive analytics is based on the assumption that 'data = reality'. Following this logic, rhetoric is also turned into data, language into a large text that can be analysed and translated using various software. Developing a rhetorical analysis that pays attention to technical systems means not only recognising the intermediaries and the filters that words have to pass through to reach their audiences but also, more importantly, challenging the predictive analytics that are produced by algorithms and increasingly integrated into social institutions (health, education, the police).
It is no longer science fiction to conceive of a rhetorical machine learning that helps speechwriters develop rhetorical arguments, as shown in a recent debate around the feasibility and effectiveness of AI writing (McKee and Porter, 2018). We discussed the example of a 2016 AI system called 'Political Speech Generation' as a direct introduction of machine learning algorithms into rhetoric. Another example is Heliograf, the Washington Post's AI system, which wrote 850 articles in 2016 alone, many of them about the presidential elections (Moses, 2017). Heliograf can therefore select snippets of Trump's speeches and include them in videos or short reports, which would of course then be edited by journalists and editors. In the financial press, the use of AI is much more widespread: about a third of the content published by Bloomberg News is produced by AI writing (Peiser, 2019). Machine learning is still in its infancy, but just as it has brought major improvements to image and facial recognition as well as to speech recognition and translation, it will soon be used to improve political speeches.
Rhetoric is a creative enterprise, and AI can help rhetors and speechwriters find patterns, as well as possible bifurcations, in the large dataset of political speeches. Since rhetorical machine learning works from below, it can also draw on the vast ocean of words published on Twitter and Facebook to flag debates and select keywords or anecdotes to include in speeches. In sum, the latest developments in machine learning make it possible for orators and speechwriters to predict the rhetoric that the audience may like at a particular moment.

Conclusion
In this article, I argued against the tendency to conceive of social media as a public sphere and suggested instead that we think of platforms and algorithms as intermediaries. Politicians and their teams who use social media to publish short extracts of speech moments participate in the optimisation process of algorithms. When we look more closely at the use of algorithms and its consequences for rhetoric, we find what I called conditionalities in the distribution of speech online. Increasingly, the content of speeches is programmed: it is mediated, augmented, supported, and produced by social media algorithms. This programming of speech has important consequences not only for the form of speeches (how they are presented and packaged) but also for their content. Algorithms are a new audience for orators.
Platforms perform different functions and act on the eventfulness of rhetoric: they moderate content and have effectively taken on roles that they themselves did not foresee, as 'setters of norms, interpreters of laws, arbiters of taste, adjudicators of disputes, and enforcers of whatever rules they choose to establish . . . rehears[ing] centuries-old debates about the proper boundaries of public expression' (Gillespie, 2018a: 5-6). Platforms also recommend content through their news feeds and trending/suggested lists, as well as promoting specific agendas with featured content and front-page offerings. It is well documented by media scholars that algorithms on social media platforms are designed to favour highly marketable content (brands, celebrities, travel, etc.), since these are advertising spaces rather than public spheres. This was the second conditionality discussed, the verticalisation of speech: digital media is not as open and democratic as is often assumed, and social media algorithms participate actively in promoting certain media content. New research has shown that advertisers in the 'news and politics' category constantly change the content of their ads across users, across attributes, and across time to nudge the behaviour of platform users (Andreou et al., 2019: 14).
In becoming a new audience for orators, algorithms intervene in both the form and the content of rhetoric and co-create rhetorical situations. By critically engaging with the recent work of Finlayson (2019, 2020) on far-right rhetorical entrepreneurs on YouTube, I argued that programmed speech content is indeed changing; when we examine their texts, we need to confront them with the processes of verticalisation of speech and contest the myth of platform neutrality. Algorithms can never be freed of biases and errors (Amoore, 2020: 74-75) since they work with assumptions and values. Digital bias is the third conditionality when examining the distribution of speeches on social media. Race, class, gender, age, and disability are important inputs for algorithms, and their codification often reproduces social and racial injustice. The problem of digital bias lies not just with the technology but with social relations themselves; algorithms simply perpetuate existing forms of racism by other means. Following Benjamin (2019), I argued that we can address the question 'why is rhetoric so white?' by tackling the (racist) input of algorithms rather than simply focusing on the output. Finally, I turned to the last conditionality: the growing use of machine learning in rhetoric. Based on past and present data, rhetorical machine learning will produce the rhetoric of the future and the predicted common sense.
In sum, the future of rhetorical political analysis needs to integrate a technological analysis that accounts for all the recent changes in the relation between orators and audiences. Speechwriters and politicians are hiring AI companies to help with campaigning as well as political action (Pegg and Evans, 2020). There are some misconceptions about the use of data in politics: data is not simply used to inform the design and implementation of policies (in health, criminal justice, or education, for instance); it informs all levels of politics, including rhetoric. Data science is not used independently of ideological and rhetorical interventions but will increasingly combine with them.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.