Platform capitalism and cloud infrastructure: Theorizing a hyper-scalable computing regime

There has been an explosion of scholarship on platform capitalism, with scholars identifying emergent labor practices, organizational forms, and business models. There is broad agreement that successful platform companies quickly dominate their markets, and winner-takes-all scenarios are common. However, market domination should not only be viewed as a condition but also as a process that is defined by specific drivers and practices. With regard to rapid expansion, much is said about network effects and data-intensive business models that are fueled by speculative logics as well as weak regulatory mechanisms. I advance the discussion on expansion and hyper-scalability by focusing on the transformation in underlying computing arrangements that shape the growth of platform-based companies. This article establishes cloud computing arrangements as setting the foundational sociotechnical infrastructure that drives rapid expansion.


Introduction
Cloud-based computing arrangements have driven the recent expansion of platform economies, with cloud computing now viewed as a utility and critical social infrastructure (Amoore, 2018; Kennedy, 2018; Kenney and Zysman, 2016; Mosco, 2014). Despite the growing significance of cloud computing as social and corporate infrastructure-and the swiftness with which it has become a vital and distinctly 21st-century infrastructure-it has yet to receive much scholarly attention. Industry definitions of cloud computing emphasize it as a method to configure the on-demand usage of computing resources (Mell and Grance, 2011; NASSCOM, 2016). However, industry definitions do little on their own to demonstrate the implications or anatomy of cloud-based computing regimes. On the other hand, academic scholarship often treats cloud computing as a rhetorical and ideological phenomenon, relegating "cloud" to the ironic confines of scare quotes. When it is referenced unironically, it is used loosely, as a synonym of "platforms" or "internet" (Kushida et al., 2015).
Although cloud computing remains shrouded, both socially and in scholarship, the phenomena it supports are emphatically tangible and impactful. Cloud-based digital infrastructure has enabled platform companies from Zoom to Airbnb to operate and expand, driving new forms of work and organization. To give just one example: the abrupt shift toward decentralized, home-based remote work within white-collar occupations during the Covid-19 pandemic has demanded new methods of handling information and communication flows within and between employer organizations. To rapidly scale up remote communication, computing, and data storage capacity, firms and other institutions have relied on external providers of hyper-scalable information technology (IT) resources.
The explosive growth of Zoom-the video communications company-is a clear example of how cloud computing arrangements reconfigure the dynamics of scale. Prior to March 2020, Zoom was a little-known company. However, over the first three months of the pandemic, its customer base grew by 224%, and by 2021 it had 300 million daily meeting participants. To support this surge, the company managed an enormous increase in computing capacity in a very short span of time. It did so by using a hybrid mix of external, on-demand cloud computing services and its own internal cloud environment.
In parallel to the growth of platform companies (such as Zoom, Uber, and Airbnb), the adoption of cloud-based computing among traditional corporations has also grown. My research in the global information technology industry has found that back-office IT workforces are helping large client firms-the likes of Shell, Sony, and Johnson & Johnson-to migrate their internal IT systems toward new cloud providers. The elasticity of cloud computing, which enables new logics of scaling, remains undertheorized as an infrastructural force of 21st-century corporate expansion.
In this article, I begin by extracting explanatory claims from existing literature that focus on the drivers, enabling conditions, and structuring factors shaping the hyper-scalability of platforms, and then set the context for cloud computing arrangements as a driving force in their expansion. I then offer an empirical analysis of cloud computing arrangements based on interviews with software workers and IT managers who work closely with cloud technologies within the global IT industry, as well as analysis of industry sources (reports, blogs, etc.). Many of the interviews were conducted with employees in India. Historically, software companies in India have maintained the in-house IT systems of American and European firms (Aneesh, 2006; Upadhya, 2016), and they are now recruited to platformize these systems (Narayan, 2022a). With cloud computing introducing new business and organizational models and altering existing modes of IT consumption, I explicate different dimensions of cloud infrastructure, revealing how it enables asset-light, rapid corporate expansion.
The article is structured as follows. First, I review existing explanations for shifts in the operation of scale under what Srnicek (2017) calls platform capitalism, and I then briefly introduce my methods. Next, I conceptually distinguish between three interlocking dimensions of the cloud-led computing paradigm: (1) hardware virtualization, (2) on-demand IT delivery, and (3) web-enabled modularization. The concluding section synthesizes the empirical analysis on cloud computing and situates it in broader debates, drawing out a conceptual difference between infrastructural conditions and second-order business strategies.

Theorizing platform expansion
Platform companies are unique in their ability to quickly grab market share in winner-takes-all markets (Sadowski, 2020; Srnicek, 2017). Commentators on platform-driven expansion have observed a disjuncture between the massive scale of platform companies on the one hand and their relatively small fixed capital investments and software workforces on the other (Davis, 2016; McAfee and Brynjolfsson, 2017; Parker et al., 2016). Davis (2016) uses the term "pop up companies" to refer to capitalist organizations that scale up their operations without vast physical infrastructure, employee bases, and fixed investments. Many platform companies are "asset light" entities when compared to their traditional brick-and-mortar counterparts (e.g., Airbnb vs Hilton), which are asset-heavy, compelled as they are to make enormous fixed capital investments (Davis, 2016; Haskel and Westlake, 2018). Further, there is now often a disjuncture between the number of platform users and the number of direct platform employees. For example, Zoom employs 4,422 people while supporting 300 million meeting participants every day. Facebook employs 71,000 people, but 3.52 billion people use its products. Davis (2016) also comments on the disjuncture between the staggering market capitalization of the dominant platform companies and their relatively meager profits.
There is a need for clear explanations that account for these patterns. Srnicek (2017), along with others (Peck and Phillips, 2020; Rahman and Thelen, 2019), offers important analytical factors that explain the rise and expansion of the platform form. Based on a still-emerging literature, I first discuss the key factors that are driving the rapid expansion of platforms and platform companies. I then make a case for serious engagement with the cloud computing industry as a fundamental precursor and driver of the 'platform economy', a term first used by Kenney and Zysman (2016).

Network effects
Growth in platform scale is associated with network effects, rather than with traditional forms of investment and financial planning on the part of the firm (Peck and Phillips, 2020; Rahman and Thelen, 2019). Here, the value of the platform expands as a direct result of the growth in the number of platform users (Hein et al., 2020; Kapoor, 2018; Kapoor et al., 2021). Crucially, adding new users does not necessarily entail a large and proportional increase in input costs on the side of the platform company. Rather, increases in usage come at relatively little cost to the owner and yet can catalyze a virtuous cycle. That is, increasing usage makes the platform more appealing to new users, which leads to an increase in the platform's value. Social media platforms such as Facebook or Snapchat serve as typical examples of this sort of virtuous cycle. The same holds for other platform firms such as Google, where growth in the number of users increases the number of search queries, which in turn improves the search algorithm. This "self-reinforcing bias" (Rieder and Sire, 2014) is increasingly of interest to scholars.
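The asymmetry described above, in which the platform's value compounds with the user base while the cost of serving each additional user stays roughly flat, can be sketched with a toy calculation. The Metcalfe-style value function and the flat per-user cost rate below are stock heuristics assumed purely for illustration; they are not figures or claims drawn from this article.

```python
# Toy model of a direct network effect. Value is proxied by the number of
# possible user-to-user connections (a Metcalfe-style heuristic, assumed
# here for illustration only), while the platform's cost is proxied as
# roughly linear in the user base.

def network_value(users: int) -> int:
    """Value proxy: number of possible pairwise connections among users."""
    return users * (users - 1) // 2

def operating_cost(users: int, cost_per_user: float = 1.0) -> float:
    """Cost proxy: grows only linearly as users are added."""
    return users * cost_per_user

# As the user base grows, the value-to-cost ratio improves: the
# arithmetic form of the "virtuous cycle" discussed above.
for n in (100, 1_000, 10_000):
    v, c = network_value(n), operating_cost(n)
    print(f"{n:>6} users -> value proxy {v:>12,}, cost proxy {c:>10,.0f}")
```

The design point is only that value grows superlinearly while cost grows linearly; any convex value function would tell the same story.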
Crucially, there are both direct and indirect network effects to account for (McIntyre and Srinivasan, 2017). Platform companies attempt to activate direct network effects. This occurs when the value of the platform and the quality of the user's experience increase with growth in platform participation. However, platform companies can also benefit from indirect network effects, where there are positive feedback loops between the number of platform users and third-party actors (complementors) who utilize the platform to create complementary services and products (Jacobides et al., 2018; Peck and Phillips, 2020). Many platforms depend on such complementors to attract and lock in new users (Narayan, 2022a; van der Vlist and Helmond, 2021). Indeed, some scholars have argued that without third-party players, many platforms have little value. For example, in order to expand, Amazon marketplace requires growth in third-party sellers to draw more buyers to the platform. Apple depends on a large number of app developers to create new products to be purchased via its App Store. Analysis of network effects and two-sided markets helps account for rapid scaling and a monopolistic tendency (McIntyre and Srinivasan, 2017; Srnicek, 2017).

Data-driven expansion
Network effects are particularly influential drivers of expansion because of the special significance of user data to platform-based business models. Growth in the number of platform users increases the volume of data available either to hone the service or product or to sell to an array of data buyers. Indeed, many argue that under platform capitalism, data has become the chief productive input (Birch and Muniesa, 2020; Couldry and Mejias, 2019; Langley and Leyshon, 2017; Sadowski, 2020; Srnicek, 2017; Zuboff, 2019). Platform companies focus on extracting granular data on platform usage and customer behavior, exploiting such behavioral insights to strengthen their business model. Take the example of Netflix. The company uses big data tools to decide which original video content to commission and also to hone a highly personalized recommendation algorithm. Driven by the behavioral patterns of its 209 million subscribers, the recommendation algorithm drives 80% of the content streamed by users. It also means that Netflix makes investment decisions by exploiting its data advantage, an advantage that allows it to gain insights into where users pause or stop watching content, or what they rewatch. Similarly, Amazon retail introduces new products that compete with third-party sellers by leveraging the enormous quantity of customer data it holds. Given the relevance of data appropriation to platform expansion, some propose that "data is the new big 'cheap thing'-the new commodity class that is emerging to reshape the world and provide a new arena for accumulation and enclosure" (Pendergrast, 2019). In sum, data is theorized as a distinct means of production that undergirds the competitive advantage and expansion of platform firms (Srnicek, 2017).
Data is not just central to expansion but is simultaneously abundant, free, and intangible. This means that scaling up the operations of a platform does not necessarily imply vast outlays of capital. If labor is conceptualized as a value-generating activity, then it holds that "data" in this context results from the unpaid and free labor of users (Dyer-Witheford, 2015; Scholz, 2013). 1 The point here is that platform companies do not have to pay to access their chief productive input. A cheap and abundant productive input is an effective driver of expansion, because the firm is not constrained by large outlays to obtain it. To quote Srnicek, "the ability to rapidly scale many platform businesses by relying on pre-existing infrastructure and cheap marginal costs means that there are few natural limits to growth…Combined with network effects, this means that platforms can grow very big very quickly." This results in what Parker et al. (2016) term "data-driven network effects." As Peck and Phillips (2020) also note, data extraction and network effects dovetail to propel expansion. In making this observation, they caution against focusing solely on the internal dynamics of the platform economy and stress the importance of macro political economy factors such as financialization and deregulation.

Speculative finance and deregulation
The expansion of the platform form has been enabled by the longer arc of financialization, which makes available speculative capital and introduces new financialized dynamics in nonfinancial industries (Krippner, 2011) such as the software sector (Shestakofsky, 2020). Over the last 20 years, the number of companies listed in the US stock market has halved. However, over this same period, the market for private equity and venture capital has grown fivefold (Kruppa and Henderson, 2019). In the United States, venture capital firms deploy around $84 billion annually (Culter, 2018). This capital seeks out investment opportunities not only in the United States but also in so-called emerging markets (Goldman and Narayan, 2021). Infusions of such speculative finance in the platform economy drive expansion and shape managerial priorities.
Specifically, this abundance of capital enables firms to prioritize expansion over profits, as platform companies are notorious for doing (Peck and Phillips, 2020; Srnicek, 2016). The availability of vast amounts of speculative capital defers profit-making and, in so doing, fuels expansion-the goal being to rapidly capture market share, an objective that supersedes the imperative to report yearly profits (Rahman and Thelen, 2019). Easy access to speculative finance breaks the traditional dependence on revenue as a means of supporting operating costs (Goldman and Narayan, 2021). Thus, external sources of speculative capital subsidize and drive rapid expansion. Indeed, Culter (2018) rightly argues that the quest for scale above all else is fundamental to the working of venture capital and is what makes it different from other forms of finance. Importantly, unlike the fickle, short-term focus of shareholders in the 1980s and 1990s (Davis, 2009), investors in public and private firms now privilege the longer-term goal of market dominance through aggressive scaling over quarterly profits (Rahman and Thelen, 2019).
This expansion is made possible by the twin processes of financialization and deregulation (Rahman and Thelen, 2019; Peck and Phillips, 2020). The capital dumping, predatory pricing, and enhancement of scale that drive the platform economy are enabled by weakened regulatory mechanisms. This allows platform companies to circumvent zoning laws (e.g. Airbnb), labor laws (e.g. Uber), and antitrust laws (e.g. Facebook). Scale, as Shapiro (2022) argues, is also enabled and maintained through regulatory arbitrage and tax evasion. Conceptually, then, the political economy of regulation-and its subversion-must be elevated as a major factor that promotes the expansion of scale.
Taken together, we can start to understand the confluence of factors that explain the rapidity by which platform firms have expanded and scaled up their operations. On the one hand, there are shaping factors that are unique to contemporary platform business models, such as network effects and data-driven competitive advantage, and on the other hand, there are historical factors to do with transformations in macro political economy (financialization and deregulation).

What of cloud computing?
Cloud computing arrangements, I argue, are foundational to platform expansion. It is easy to forget that digital platforms are essentially glorified IT systems-assemblages of hardware, software, and software labor-that take on particular market roles and organizational forms. A separate industry exists to support the growth of platforms by driving cloud adoption. Cloud computing is now an industry in its own right, with behemoths such as Amazon, Google, and Microsoft owning the core hardware infrastructure of this computing regime, and other firms offering cloud software and services on top of it. To begin analyzing the relationship between new logics of scale and cloud-based computing, we first need to frame the long-standing relationship between IT and the capitalist firm in the pre-cloud era. Three aspects of the traditional computing regime stand out.

In-house systems
In the 1970s-2000s era, computing systems were in-house systems. Organizations maintained data centers and large internal IT departments, with the locus of computing being internal to corporations (Campbell-Kelly and Garcia-Swartz, 2008a, 2008b). The corporate IT industry terms this the "on-premise" model. The 1990s ushered in a new era of outsourcing and offshoring (Aneesh, 2006; Upadhya, 2016; Peck, 2017; Narayan, 2017; Narayan, 2022b). Crucially, it was IT services and IT labor that were sourced from countries like India-and not computing assets. Computing assets-hardware and software-continued to be maintained in house. Cloud computing, as I discuss below, represents a radical form of outsourcing. If the first wave of offshoring represented a new method of procuring just-in-time IT labor, the second represents a method of rapidly provisioning just-in-time computing assets. I argue that this development has had a transformative impact on the issue of scale.

Back-office functions
Corporate computing has typically had a back-office function, with computerization expanding across retail, banking, transportation, and public sector industries over the last 70 years (Campbell-Kelly, 2003; Cortada, 2004, 2012; Yost, 2017). The "back office" here is the figurative term for the infrastructural systems and IT work that are essential to rationalizing and automating the corporate bureaucracy. In the post-war period, consultants, along with firms in the computing industry, marketed the computer as a technology that rationalizes information-intensive corporate processes (Campbell-Kelly et al., 2013). Conversely, the front office references corporate activities that directly engage customers. The hyper-scalability of cloud infrastructure in the era of ubiquitous internet unsettles the boundary between the back office and the front office. It becomes the basis of an entirely new corporate form-the platform company.

Monolithic architectures
Software programs, in terms of their architecture, were independent, stand-alone entities (Plantin et al., 2018) that could be boiled down to a single codebase. The various programming components were tightly coupled, rather than modular. This meant that making updates or changes involved altering the entire software program as a single, self-contained unit. In sharp contrast, cloud computing goes hand in hand with modular architectures, where different elements of the system can scale independently of one another. Different segments of the codebase are loosely coupled, allowing for rapid scaling.
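The contrast between tightly and loosely coupled codebases can be made concrete with a minimal sketch (all names here are hypothetical, invented for illustration). In the monolithic version, inventory and billing logic live inline in one function, so any change means redeploying the whole program; in the modular version, each concern sits behind its own small interface and could, in principle, be updated and scaled independently.

```python
# --- Monolithic style: one tightly coupled unit ---
# Inventory checks and billing are inlined; changing either means
# changing (and redeploying) the whole program.
def process_order_monolith(order: dict) -> str:
    stock = {"widget": 5}                    # inventory logic inline
    if stock.get(order["item"], 0) < order["qty"]:
        return "rejected"
    stock[order["item"]] -= order["qty"]
    return f"charged {order['qty'] * 9.99:.2f}"   # billing logic inline

# --- Modular style: loosely coupled components behind small interfaces ---
# Each component could live in its own service and scale on its own.
class Inventory:
    def __init__(self) -> None:
        self._stock = {"widget": 5}

    def reserve(self, item: str, qty: int) -> bool:
        if self._stock.get(item, 0) < qty:
            return False
        self._stock[item] -= qty
        return True

class Billing:
    def charge(self, qty: int, price: float = 9.99) -> str:
        return f"charged {qty * price:.2f}"

def process_order_modular(order: dict, inventory: Inventory, billing: Billing) -> str:
    if not inventory.reserve(order["item"], order["qty"]):
        return "rejected"
    return billing.charge(order["qty"])
```

Both versions compute the same result; the difference lies in the coupling, which is exactly what determines whether parts of the system can scale independently.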
The sections that follow demonstrate that cloud computing arrangements disturb these three elements of the traditional regime of corporate computing.

Methods
My conceptualization of cloud computing is based on three empirical sources:
1. In-depth interviews (software programmers, managers, analysts, and consultants)
2. Ethnographic observation (meetings between IT managers)
3. Industry materials (blogs, journals, websites, reports)
Between 2017 and 2020, I undertook semi-structured interviews with engineers and managers who work closely with cloud systems in the corporate IT industry. Table 1 offers details. The majority of the fieldwork was conducted in India, which is home to the largest offshore IT industry in the world, as measured by employee count and global market share (Peck, 2017; Narayan, 2017). My interview questions related to the changing nature of the corporate computing industry, the implications of these changes for corporate customers, and the interviewees' experience of working within the cloud computing paradigm. The interviews ranged between 45 minutes and 4 hours, with the average interview lasting around 2 hours. This original body of research was supplemented with interviews and ethnographic observation from a case study (undertaken in 2021-2022) of a company that helps large customers with cloud cost management.

The platformization of computing assets
Amazon is the largest cloud provider. The corporation's cloud division, Amazon Web Services (AWS), aggregates hardware and virtualization technology, the capabilities of which it leases out to external customers via the internet. The rise of AWS has been staggering, with platform companies such as Netflix, Spotify, Facebook, LinkedIn, and Airbnb all using its on-demand computing resources. Its customers also include more traditional companies such as Johnson & Johnson, BMW, and Pfizer. Today, Google and Microsoft compete with Amazon's cloud business to offer customer corporations a variety of cloud-based computing services, and in doing so interrupt existing models of corporate computing.
The following section examines the convergence of three important aspects of cloud computing, namely: virtualization, on-demand IT delivery, and web-based modularity. Contemporary digital platforms are able to scale rapidly precisely because of the convergence between these separate but interrelated techno-organizational processes.

Virtualized hardware capability
What precisely does Amazon offer? Its most used service is called Amazon Elastic Compute Cloud (EC2), a service that grants organizations quick access to "virtual machines." Instead of a single server supporting a single operating system dedicated to a specific task, Amazon uses an abstraction layer of software that partitions its physical servers. The functionality of physical hardware is thus split up, such that it is decoupled from specific software environments. That is, multiple software environments can be supported by the same physical server. This form of partitioning gives the server a unique elasticity: it can simultaneously act as multiple machines, known as virtual machines (VMs).
The concept itself is not new. Virtualization technology has its origins in the 1960s, gradually getting more sophisticated over the decades. What Amazon did was aggregate huge amounts of computing hardware by creating large-scale data centers. It then went on to offer customers myriad different types of virtual machines, referred to as virtual machine "instance types." Each instance type gives customers different combinations of processing power, storage capacity, and networking resources (see Figure 1). There now exists an entire industry of consultants and technologists who help customers decide which instance type is optimal for their various computing needs. An IT manager discussed the virtualization of hardware by saying, "With virtualization you can just add memory-without even shutting down the server. You can 'plug and play' extra capacity. Once you can have multiple virtual servers in one physical server it is an exponential move" (Kumari, 1 August, 2020). This transforms the relationship between hardware and software, greatly enhancing the autonomy of these two elements. To quote a software engineer:

The entire infrastructure is a script, a configuration. You fire up a script and it does everything else. The cloud allows you to program your infrastructure, otherwise you would have had to physically get your hardware and software. Now, you can take it to such a level of abstraction that it boils down to a single script which can trigger other scripts. 2 (Shankar, January 17, 2017)

Virtualization also means that computing capacity can be deployed and retired without writing off what would otherwise be a sunk cost. Another developer discussed this ability of firms to rapidly generate computing infrastructures using virtual machines:

When you spin up new servers, you aren't procuring the hardware. You are accessing the virtualized layer on top of someone else's infrastructure. This is the virtualization of a computer. It lets us create and destroy at the click of a button. (Nakul, March 27, 2017)
To quote another interviewee, "now you can scale independently of physical architecture. The cloud providers bought, say, 50 servers and now they have 10,000 of us using them. I'm sure we are not buying physical hardware [at AWS]. They just created an abstraction layer on their physical hardware and they're renting out some resources. They rearrange and distribute resources" (John, February 21, 2022). Acting in parallel to the virtualization of hardware is the process of containerization. Here, software applications are packaged in a way that makes them neutral or agnostic to specific operating environments, further deepening the decoupling of hardware from software. If virtual machines represent an abstraction at the hardware level, then containerization represents an abstraction at the level of the operating system. Each virtual machine can now host a different operating system. Containerization provides "a cheaper way of doing virtualization. In each virtual machine you can run multiple containers. Containers help manage complex interdependencies when systems are so dispersed" (Nakul, March 27, 2017).
Thus, both virtualization and containerization divide up a single physical server into smaller and smaller partitioned fragments, each of which can scale independently of the others, and each of which might now be distributed across a vast network of physical servers. Virtualization and containerization greatly amplify the flexibility of the underlying hardware, meaning that the different parts of what was previously a single machine have been unbundled, fragmented, and dispersed. One developer noted that he never actually knows where in the world, and on which server, his code actually runs: "That part is abstracted away," he said, "I don't have to care about that" (Jay, April 20, 2017).
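A toy model can make this partitioning logic concrete. The sketch below is entirely illustrative; the class and method names are my own, not any cloud provider's API. It treats a physical server as a fixed pool of CPU and memory from which virtual machines are carved out, and to which their resources are returned, on demand.

```python
# Minimal toy model of hardware virtualization: one physical server's
# fixed resources are partitioned into virtual machines that can be
# created and destroyed without touching the hardware itself.

class PhysicalServer:
    def __init__(self, cpus: int, memory_gb: int) -> None:
        self.free_cpus = cpus
        self.free_mem = memory_gb
        self.vms: dict[str, tuple[int, int]] = {}

    def spin_up(self, name: str, cpus: int, memory_gb: int) -> bool:
        """Allocate a VM if the host still has capacity; else refuse."""
        if cpus > self.free_cpus or memory_gb > self.free_mem:
            return False
        self.free_cpus -= cpus
        self.free_mem -= memory_gb
        self.vms[name] = (cpus, memory_gb)
        return True

    def tear_down(self, name: str) -> None:
        """Destroy a VM and release its resources back to the host pool."""
        cpus, mem = self.vms.pop(name)
        self.free_cpus += cpus
        self.free_mem += mem

host = PhysicalServer(cpus=32, memory_gb=128)
host.spin_up("web-1", cpus=4, memory_gb=16)
host.spin_up("db-1", cpus=8, memory_gb=64)
host.tear_down("web-1")   # "create and destroy at the click of a button"
```

The model captures only the allocation logic; real hypervisors also enforce isolation between the VMs sharing a host, which is what makes multi-tenancy viable.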
How does this relate to the question of scale? When Netflix or Zoom experience a sudden surge in demand for their services, they rapidly provision more virtual machines without having to make a large purchase of physical servers. Similarly, if Netflix (for example) alters its business strategy and chooses to reduce its presence in a certain market, it does not necessarily have to write off huge sunk costs, but rather it reduces the number of virtual machines it subscribes to via Amazon.
Marco, an engineer and cloud consultant, explains his job is essentially to study clients' cloud usage and suggest ways to tweak the consumption of CPU, memory, and bandwidth by choosing the most appropriate Amazon or Google on-demand computing service. "So, you have a lot of different machines available from AWS and GCP (Google Cloud Platform). You can get a machine with a huge amount of CPU and not so big amount of memory, or vice versa. You can choose and configure." (Marco, February 16, 2022).
This form of abstraction and partitioning at an architectural level allows for the sharing and allocation of cloud providers' hardware infrastructure among their many customers. Multi-tenancy is not new; as computing historians remind us, it existed during the mainframe era (Campbell-Kelly and Garcia-Swartz, 2008; Kennedy, 2018). But the scale at which it operates via the public internet and the steadily falling costs of computing are producing new effects. This is part of the reason why the cloud computing regime can be viewed as a utility shared by organizations and individuals alike. Virtualization represents a very important aspect of this computing regime, one that intersects with on-demand delivery mechanisms.

On demand computing: The just-in-time delivery mechanism
This relative liquefaction does not occur only at the level of hardware-software relations, described above. It also occurs through a restructuring of the organizational relations that determine how computing resources are delivered and consumed. Virtualization and cloud-based IT delivery need to be separated conceptually. As an industry commentator states, "virtualization is software that manipulates hardware, while cloud computing refers to a service that results from that manipulation" (Rivera, 2018). Virtual machines, databases, development platforms, and file storage systems can all be delivered and used on demand on a pay-per-use basis over both public and private networks. Infrastructure and platform owners supply customer organizations with just-in-time resources through rental or subscription models. Just as individual consumers subscribe to storage services (e.g. Google Drive) or applications (e.g. Netflix) for a monthly fee, the organizational consumers of computing resources also buy a range of software and services, depending on their short-term needs.
Thus, cloud-based consumption is as much an organizational phenomenon as a technological one, representing an important shift in how computing assets are accessed and distributed. In fact, the trade literature defines cloud computing as a unique mechanism of (web-based) delivery, supported by distinct pricing models (i.e. pay-per-use). 3 Instead of buying and owning infrastructure, clients have the option to utilize assets by increasing or decreasing their usage in real time, with the computing and storage capacity owned by an external provider.
From the perspective of a customer organization, cloud-based IT is therefore a method of meeting IT needs that eliminates large upfront costs. Processing power, data storage, and software applications are rented, with the classic "make or buy" decision better expressed as a "rent or own" decision. For corporate customers interested in cloud offerings, computing resources are not "things" to be owned, maintained, and invested in. Rather, these are services to be accessed over the internet on demand.
To quote a developer, "the way software is distributed has changed so much. People don't get software [anymore], they just run it" (Nithin, January 29, 2018). This distinction between "getting" and "running" IT indicates the shift from IT as an internal asset to IT as an on-demand service. The top executives of IT services companies I interviewed discussed how cloud computing grants their clients the choice not to invest in their own independent in-house systems. As a recently retired Chief Technology Officer of one of India's top IT firms said:

Cloud is the aggregation of computing available on tap. [Firms] don't have to build IT infrastructure to use IT infrastructure. [They] don't need to worry about owning servers and systems. (Bhaskar, June 20, 2017)

Another senior IT manager also brought up the issue of ownership, observing that processing power and storage "is available today with a swipe of a card on a pay-per-use model. You don't need to own data centers. You don't need huge infrastructure" (Ananth, May 14, 2017). This was echoed by an engineer: "If I need the capacity of 1000 servers for only 48 hours-without the cloud, I have to budget it for the whole year. A huge upfront cost. You might use a server for three or four hours a day, but you have to procure it 24/7. Organizations did not have any other option. Now they do" (John, September 12, 2017).
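John's scenario lends itself to back-of-the-envelope arithmetic. The prices in the sketch below are hypothetical placeholders, not actual cloud or hardware rates; the point is the structure of the rent-versus-own comparison, not the specific figures.

```python
# Rent-vs-own comparison for a short-lived capacity spike: 1,000 servers
# needed for only 48 hours. All prices are assumed for illustration.

SERVERS = 1_000
HOURS_NEEDED = 48

own_cost_per_server_year = 3_000.0   # assumed purchase + upkeep per server, per year
rent_cost_per_server_hour = 0.10     # assumed on-demand hourly rental rate

# Owning: the capacity must be budgeted for the whole year (CapEx).
owning = SERVERS * own_cost_per_server_year

# Renting: pay only for the 48-hour window actually used (OpEx).
renting = SERVERS * HOURS_NEEDED * rent_cost_per_server_hour

print(f"own for the year: ${owning:,.0f}")
print(f"rent for 48h:     ${renting:,.0f}")
```

Under these assumed rates the rented capacity costs orders of magnitude less, which is the CapEx-to-OpEx shift the interviewees describe: the same computation, priced by the hour rather than held on the balance sheet.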
The interviewees often discussed how cloud computing renders IT fluid and elastic, given it can be procured "on tap." From the standpoint of fixed capital, this is critical. This paradigm transfers fixed computing assets from the bucket of long-term capital expenditure (CapEx) to routine operating expenditure (OpEx).
The advantage of such an externalized model with flexible pricing (i.e. pay-per-use) is that it reduces unused capacity, allowing clients to increase and decrease their usage according to short-term forecasts. This just-in-time logic, for example, is very appealing to the retail industry, where there are major spikes in sales during certain periods of the year, like Christmas and Black Friday. Instead of sitting on unused capacity all year round in preparation for a single week, retailers can increase their consumption during the short window when their IT needs spike.
I had a unique opportunity to interview a senior IT manager at one of the world's largest retail chains. To quote, With the cloud you can scale up and down as you need it. Our peak usage of our applications is Thanksgiving to Christmas. We have to build our systems to that scale to take that load. [There has been] no choice but to have that capacity all year. With cloud you can scale…[T]he public cloud has ease of use; you don't need resources and people maintaining those systems (Meena, April 24, 2018).
Echoing this, another manager likened the traditional model of IT to a bank account and, conversely, cloud-based delivery to a credit card. With a bank account, "you [don't] spend money until you have money in the bank" (Dennis, October 29, 2021). Analogously, with the traditional model, the parameters of IT usage are determined by the infrastructure that is available within the firm. This is fixed by forecasting computing needs and making investment decisions annually. However, as with a credit card, the cloud-based model allows you to "spend as you go." Here, technology usage can change on a weekly or even daily basis. An engineer who monitors the use of AWS for a Fortune 500 company agreed: "I don't remember customers [of cloud computing], where we had a constant infrastructure that was not changing. You are always adding new resources. With the customer that we are working for, every day, we have a new thing that is launched, things are always changing. You are talking about millions of changing resources" (James, December 21, 2021). This is the promise of externalized systems delivered over the internet: rapid, scalable computing infrastructure that is available only when needed, thereby loosening the rigidities associated with long-term, fixed IT investment. The management of sunk costs is a key area of concern for firms (Clark, 1994; Clark and Wrigley, 1995). But here, server capacity and applications that are temporarily rented stand to eliminate the lump-sum costs and the slow processes of buying and implementing servers and other hardware and software products. The just-in-time model of IT consumption has immediate implications for scale and expansion. Going back to the example of Zoom, the company was able to meet the massive surge in demand during the first Covid-19 lockdowns by swiftly increasing its consumption of AWS infrastructure.

Web-based modularity
The cloud-enabled computing regime has a third dimension that is integral to the question of scale and platform expansion. Not only have computing assets been increasingly virtualized and externalized to data centers that are owned by a handful of providers, but software is now networked in new ways. Software systems in the pre-cloud era were bounded, stand-alone systems owned and managed by the customer firm. Every time a firm decided to adopt a new software package it had to first buy an expensive license, and then hire an army of technicians to install and integrate it with other software and periodically upgrade it. However, these activities are increasingly falling under the jurisdiction of the software provider rather than the customer. The provider, which hosts the application, initiates and rolls out new features and software updates.
Cloud software simplifies the implementation and upgrading of computing resources as a result of internet-led modularization. Software "lives" in the premises of the provider but "travels" to the customer organization via the internet. Cloud systems are integrated using what some refer to as boundary resources (Kapoor et al., 2021). This web-delivered software links with other applications, resulting in a do-it-yourself approach to computing infrastructure. To quote an IT manager, "Anything I want I get from a service catalogue. It's like the do-it-yourself, modular furniture that IKEA provides" (Arvind, February 11, 2018).
Software applications in the era of advanced browser technologies and web-enabled apps are neither standalone nor tightly integrated monoliths but are instead connected by application programming interfaces (APIs). APIs can be thought of as the "hooks" at the ends of software; these hooks might be exposed by the firm or developer so that other applications can easily connect with them. One programmer describes APIs as "a collection of code, a set of functions, that can be accessed by a developer to let different components talk to one another" (Prabhakar, October 23, 2017). To quote the cofounder of a startup that offers developers certain standardized, ready-to-use tools, "APIs are just a way for two services to communicate. You can write an application in a way that every piece is talking to another piece. If the API endpoint is consistent, then you can change things without disrupting the other pieces it is communicating with" (Radhika, June 19, 2017). Once discrete applications can be connected through APIs the structure of software is reformulated. As the cofounder of a startup that produces and hosts web-delivered software applications explained, We are exposing our APIs for others to hook into. I build the product in a way that makes it easy for developers to plug in. Our software opens at its end points, it offers hooks to others. They can just read the documentation and plug in. Even large companies are changing and have no choice but to open up the ecosystem. (Krishna, October 29, 2017) The new cloud paradigm converges with significant changes in the world of software development. From the perspective of application development, developers do not have to write or build software from the ground up; they rely on new web-based frameworks that enhance reusability, speed, and flexibility when it comes to the creation of cloud-enabled software. 
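Radhika's point that a consistent API endpoint lets internals change without disrupting the pieces communicating with it can be sketched in a few lines. The function names below are hypothetical, chosen only to illustrate the contract between caller and provider:

```python
# A minimal sketch of an API contract: callers depend only on the endpoint's
# name and return shape, never on the implementation behind it.
# All names here are illustrative, not drawn from any real service.

def get_user(user_id: int) -> dict:
    """Public API endpoint: its signature and return shape stay stable."""
    return _fetch_from_storage(user_id)

def _fetch_from_storage(user_id: int) -> dict:
    """Version 1 of the internals: a trivial in-memory lookup."""
    return {"id": user_id, "name": f"user-{user_id}"}

# The provider can later replace _fetch_from_storage with a database call or
# a remote service. As long as get_user keeps its signature, every application
# "hooked into" the endpoint keeps working without any change on its side.
caller_view = get_user(42)
print(caller_view)  # prints {'id': 42, 'name': 'user-42'}
```

This is the stability Radhika describes: "if the API endpoint is consistent, then you can change things without disrupting the other pieces."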
To quote the website of a popular framework for development, "Django makes it easier to build better Web apps more quickly and with less code. Built by experienced developers, it takes care of much of the hassle of Web development, so you can focus on writing your app without needing to reinvent the wheel." Like Django, many tools and frameworks are free and open-source, and they are increasingly used to produce software. The cofounder of a startup that creates back-end tools for developers noted that her company "tries to abstract away as much as possible" (Radhika, June 19, 2017). Indeed, this company advertises its ability to minimize "the back-end code a developer needs to write by providing certain application programming interfaces [which provides] the infrastructure needed by a developer, to create an application within seconds." Over time, numerous ways of reducing the "cognitive overhead" of software development have emerged. Reusable components and pieces of preexisting code reduce the amount of effort and time that goes into software production and deployment. One programmer talked about how there are now numerous standardized building blocks that developers have at their disposal. These gradual developments in software creation have combined and converged with the cloud-based delivery model. From the perspective of corporate IT, APIs and standardized "building blocks" reduce labor-intensive implementation and integration costs. Now that these processes have been simplified, it is potentially easier for a client firm to implement new software applications, as doing so no longer involves the laborious process of deployment and integration associated with the traditional on-premise licensing system.
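The division of labor that frameworks like Django establish can be sketched in miniature. The toy router below is not Django; it is a hypothetical illustration of the general pattern, in which the framework supplies the repetitive machinery and the developer writes only the application-specific handler:

```python
# A toy illustration (not Django itself) of how web frameworks reduce
# boilerplate: the framework provides registration and dispatch, so the
# developer supplies only the application logic.

routes = {}

def route(path):
    """Framework-provided building block: registers a handler for a path."""
    def register(handler):
        routes[path] = handler
        return handler
    return register

@route("/hello")
def hello():
    # The developer's only job: the application-specific response.
    return "Hello, world"

def dispatch(path):
    """Framework-provided machinery: looks up and calls the right handler."""
    handler = routes.get(path)
    return handler() if handler else "404 Not Found"

print(dispatch("/hello"))    # prints "Hello, world"
print(dispatch("/missing"))  # prints "404 Not Found"
```

The developer never writes the lookup or error-handling code; that is precisely the "hassle of Web development" the framework's website promises to take care of.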
In addition to the improvements in software deployment and integration, there have been important shifts in the area of software upgrades. The traditional IT regime required every firm to employ IT workers to install upgraded versions of their software. This is much like how individual consumers would buy compact discs to update their operating systems. Now users of software applications and operating systems can avail themselves of automatic updates and new releases triggered by the software owner, whether on smartphones, desktops, or tablets. From an organizational perspective, the task of upgrading and revising IT applications and systems has always represented a vast, complex, and expensive endeavor for corporations-reflecting the inescapable imperative to not just invest in fixed capital but also to continuously maintain it. That is, the formation of fixed capital is not a one-time event. Computing systems, for example, need to be updated, repaired, and changed. However, aggregation and centralized sharing via the internet can have powerful effects, further compounding the plasticity of this computing regime.
The software provider hosts the system and continually makes changes to the software-releasing new versions, bug fixes, and features-which instantly updates the software for all users. From the perspective of firms, this immediately diminishes the need for technicians to painstakingly update every on-premise system every time software is updated. The Indian employees of the offshore IT sector are all too aware of these shifts, given that customization and upgradation have historically been a major source of revenue. The head of a company that rents out cloud software said, The on-premise model had a few flaws. Each deployment was done new and afresh. Whereas in the cloud era, deployment, configuration, management, bug fixing-everything is done remote[ly], everything on common infrastructure. (Vivek, February 6, 2018) Although modularity is recognized as a core facet of a platform environment (Baldwin and Woodard, 2009; Plantin et al., 2018), the implications for platform expansion need to be made explicit. In general, modular systems made up of separate units that work together enhance scalability. Such systems allow computing assets that were previously more static to be modified and enhanced continuously rather than periodically. Automated upgrades triggered at the back end mean that technology systems are always changing infrastructures, sometimes even without the knowledge of the user. Infrastructures that were once comparatively fixed and stable now expand, change, and grow swiftly.
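The contrast Vivek draws, between per-site deployment and maintenance done once on common infrastructure, can be sketched with a single shared service object. The class and version numbers below are purely illustrative:

```python
# Illustrative contrast between on-premise copies and a centrally hosted
# service. All names and version numbers are hypothetical.

class HostedService:
    """One copy of the software, run by the provider for all customers."""
    def __init__(self):
        self.version = "1.0"

    def upgrade(self, new_version):
        # A single change made on the provider's side...
        self.version = new_version

# Every customer firm points at the same hosted instance, rather than
# each running and maintaining its own on-premise copy.
shared = HostedService()
customers = {"firm_a": shared, "firm_b": shared, "firm_c": shared}

# Under the on-premise model, each firm would pay its own technicians to
# install this upgrade separately. Here, one back-end release...
shared.upgrade("2.0")

# ...is instantly what every customer runs, with no per-site work.
print({name: svc.version for name, svc in customers.items()})
# prints {'firm_a': '2.0', 'firm_b': '2.0', 'firm_c': '2.0'}
```

This is the mechanism behind "everything is done remotely, everything on common infrastructure": the update happens once, at the point where the single shared copy lives.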

Conclusion
Scholarship in this area has largely sidestepped cloud computing arrangements and their fundamental and constitutive role in structuring platform capitalism and propelling its expansion. The absence of studies focused on cloud computing is likely due to three reasons. First, studying cloud systems involves studying corporate computing and the global IT industry, both of which remain minor areas of empirical study. Second, critical scholarship has predominantly focused on the peer-to-peer "gig" and consumer-facing platform economy (Grabher and van Tuijl, 2020). Third, critiques of techno-evangelism seem to almost prohibit serious engagement with the anatomy of platforms and organization of cloud infrastructure. Importantly, STS and media scholars now attend to data centers as the material base of cloud computing (e.g. Hogan 2015; Taylor 2019; Monserrate, 2022). Yet, materiality does not simply amount to tangible physicality (Harvey, 1982). Cloud computing possesses a material existence that far exceeds the data center realm.
Although the technological moorings of platform economies do need to be wrested from technophilic discourses, their study cannot be abandoned altogether. Platforms, as they exist today, would simply not operate as they do without the ongoing transformation in the computing industry and reformulation of hardware-software relations. Without virtualized and distributed hardware arrangements, on-demand usage, and modularized architectures, neither labor nor commodity platforms would sit on smartphones and web browsers. A global restructuring in the composition, organization, and consumption of computing assets is fundamentally constitutive of platform capitalism, with cloud computing arrangements setting basic technical and organizational preconditions. This industry must therefore be viewed as an infrastructural sector with far-reaching consequences.
The time when data and software programs lived solely in user devices has passed (Perzanowski and Schultz, 2018). Software is no longer installed and updated using compact discs and other physical media, but via automatic updates over the internet. Capitalist organizations do not have to buy hardware to benefit from hardware capacity. Organizations and individual users are no longer limited by the storage capacity of their machines. In other words, the locus of computing has changed (Campbell-Kelly and Garcia-Swartz, 2008a, 2008b), with software applications, user data, and hardware now increasingly hosted by external providers. At one level, this can be regarded as a simple extension of a hyper-outsourced economic model.
Asset externalization is indeed a critical dimension of the cloud-based regime. The current paradigm of externalization in computing can be likened to another key moment in the history of infrastructure-electrification via the power grid, when the generation of electricity was outsourced to centralized service providers. On-demand distribution of electricity gave rise to a new utility. Analogously, cloud providers centralize a wide range of resources, bundle them in new ways, and rent them out as a service via the public internet and private networks. As with electrification, the material outcomes of this infrastructural shift are enormous. These shifts undergird the contemporary platform and its organizational ecosystems, catalyzing new organizational forms (Davis, 2016), competitive and monopolistic logics (Narayan, 2022a), market strategies (Shapiro, 2020), labor conditions (van Doorn and Vijay, 2021), and urban environments (Nowak, 2021). Metaphorically speaking, cloud computing arrangements can be thought of as the plumbing and electrical networks upon which platforms are built. In arguing for the fundamentally infrastructural character of cloud-based IT systems, I respond to a recent call for explicit dialogue between platform studies and infrastructure studies, with scholars noting both the infrastructuralization of platforms and the platformization of infrastructure (Langlois and Elmer, 2019; Plantin et al., 2018; Plantin and Punathambekar, 2019).
That being said, this article suggests that while the radical outsourcing of computing infrastructure to new providers is an important dimension of the story, it is not the only lens with which to regard cloud-driven transformations. 4 It would be misleading to portray this as solely an own-or-rent dilemma, in which in-house computing is simply replaced by outsourced computing. This oversimplifies the rupture. Indeed, I hope I have shown that this is more than a simple "lift-and-shift" process whereby customer firms move IT systems from in-house to external data centers. Peck (2017) rightly argues that outsourcing is not a single event, but an adaptive ongoing process of automation, redesign, decomposition, and recomposition (p. 43). Here, Peck is discussing the outsourcing of IT labor: the process by which global north firms, in the 1990s, began offshoring IT work to countries like India. However, with cloud-based computing, the structure, anatomy, and architecture of hardware and software are transformed-not merely transferred or outsourced. Computing systems are recomposed to the extent that it would be a mistake to think of them as just another outsourced model. Unlike with the offshoring and outsourcing of IT work, the effects of the cloud-led reformulation of IT assets reverberate across numerous disparate sectors of the global economy, affecting business-to-business and consumer sectors alike. These effects go far beyond the limits of specific geographies or specific IT industries, and they usher in new modes and strategies of accumulation and corporate expansion.
Rahman and Thelen (2019) write, "Platform firms are important not for their ubiquity, or because all firms will look like them, but because they represent the leading edge of emerging business models and increasingly set the terms of the markets they enter" (p. 179). Cloud computing has played a major role in introducing new conditions for value extraction in a number of sectors, due to the flood of new entrants utilizing this model to build hyper-scalable platforms, which has a market-altering impact (Kenney et al., 2015). Moreover, large incumbent firms-traditionally the massive consumers of in-house IT-are also experimenting with cloud-based models via various hybrid methods. This impetus toward cloud computing is reflected in the fact that all major IT companies (e.g. Accenture, Infosys, IBM, SAP, Oracle) have restructured their business strategies and organizations in an urgent effort to enter the cloud computing market, the epicenter of which is dominated by Amazon, Google, and Microsoft. Indeed, my research finds that IT companies are now actively encouraging their customers to favor cloud IT over "on premise" IT, with large enterprises now attempting to integrate in-house computing models with cloud-driven models. The biggest concern here is the cost, complexity, and security problems associated with the migration of legacy systems and cloud adoption.
What do cloud-driven shifts mean for broader theorizations of organizational and business model expansion in the platform age? How should cloud computing be viewed vis-à-vis other driving factors, namely network effects, big data, financialization, and deregulation? Cloud computing arrangements are a clear precondition to platform expansion. This demands a theorization at a high level of analytic abstraction, given that they set the foundational sociotechnical condition for all contemporary platforms. Network effects and big data capture are highly relevant secondary factors of expansion for many platform companies, particularly those that are consumer-facing, where value is generated by users and captured by the platform firm. By "secondary" I do not mean these are any less important, but that these are not infrastructural factors. The extraction, manipulation, and storage of big data requires a hyper-elastic infrastructure afforded by cloud-based computing arrangements. And network effects are particularly strong drivers of expansion when platforms rely on third parties to build complementary products and services (Jacobides et al., 2018; Parker et al., 2016). APIs and other boundary resources represent the infrastructural architecture that introduces the modularity which strengthens network effects. Platforms, regardless of their specific industries, are built on the convergence between web-based modular architectures, virtualization, and just-in-time computing mechanisms. Thus "cloud platforms" are not only a type of platform (Srnicek, 2017), but also a computing paradigm that is constitutive of platform-based organizational forms and business models. Therefore, hyper-scalability needs to be studied not only at the level of the business model but also at the level of infrastructure.
Fordist and Post-Fordist firms achieved market expansion through long-term investment, vertical and/or horizontal integration, R&D investment, and diverse cost-cutting strategies. Core to today's platform-based expansion is the scaling up of underlying cloud infrastructure to support growth in usage and then the exploitation of second-order strategies. Second-order factors include big data extraction, infusions of venture capital, the manipulation of platform design, the exploitation of asymmetries between the platform and labor, acquisitions, and so on. My objective here is to distinguish between business strategies associated with platform expansion, and the specific foundational and expansionary possibilities generated by cloud infrastructure. Parker et al. (2016) discuss how traditional corporations that are bound by large fixed and sunk costs run up against new platform companies that have neither. Fixed capital costs, as economic geographers have long theorized, discipline, constrain, and tether firm strategy and industrial organization (Clark and Wrigley, 1995; Harvey, 1982; Schoenberger, 1997). By contrast, cloud computing in the age of ubiquitous internet has an untethering effect. Market expansion does not rely on long-term fixed investments, but on scaling up on-demand virtual machines and attendant software infrastructure, enabling asset-light business models based on two-sided network effects, data-intensive algorithms, gig labor, and so on. The work of Velkova (2022) and Brodie and Velkova (2021) on the transience and impermanence of data centers-the hardware layer of cloud computing-is instructive here. In tracing the swiftness by which data centers are dismantled or relocated, they further reveal the liquification of computing fixed capital.
Although cloud computing arrangements have been driving platform expansion for 15-odd years, the ongoing pandemic serves as a flashpoint, with some platforms enacting massive scaling-up events in response. This is true not only for communications platforms like Zoom but also for meal delivery and e-commerce platforms which similarly saw a sudden and dramatic surge in usage (Shapiro, 2022). The cloud-based computing paradigm certainly subsidizes entrepreneurship and encourages a flood of new platform entrants in diverse markets, but equally important is the way it allows for sudden, rapid expansion, with minimal marginal cost.
There remains much to learn about how a new infrastructural regime is proliferating through market struggles and particular organizational practices. Adopting cloud infrastructure demands a total overhaul of financial planning and organizational practices around corporate investment into technology. Supporting this change are new industry forums and the aggressive market-making activities of the cloud computing heavyweights-Amazon, Google, and Microsoft. From a research standpoint, we need to pivot from framing these mega firms solely as mass-market, consumer companies (in the respective areas of e-commerce, web search, and operating systems). These are conglomerates that have major business-facing arms, resulting in broad-based industrial impacts in diverse sectors. The fact is, more than 50% of Amazon's operating profit now comes from its cloud business. We know very little about cloud providers and their practices of expansion. Research into the market-making activities of the cloud lobby as well as the messy, nonlinear process of cloud adoption on the part of customer firms is sorely needed. Overall, this article asserts the significance of cloud-based computing vis-à-vis contemporary corporate capitalism and invites further inquiry into this theme.