Police use of facial recognition technology: The potential for engaging the public through co-constructed policy-making

In the face of the rapid development of investigative technologies, broader and more meaningful public engagement in policy-making is paramount. In this article, we identify police procurement and use of facial recognition technology (FRT) as a key example of the need for public input to avoid undermining trust in law enforcement. Specifically, public engagement should be incorporated into police decisions regarding the acquisition, use, and assessment of the effectiveness of FRT, via an oversight framework that incorporates citizen stakeholders. Genuine public engagement requires sufficient and accurate information to be openly available at the outset, and the public must be able to discuss their perspectives and ideas in dialogue with others. The approach outlined in this article could serve as a model for addressing policy development barriers that often arise in relation to privacy-invasive technologies and their uses by police.


Introduction
In early 2020, a controversy erupted in Canada when it was revealed that several police services had accepted a free trial of a facial recognition product from US-based company Clearview AI. This software uses a large database containing millions of images scraped from the internet and social media sites to identify people without their knowledge or consent (Browne, 2020). The controversy highlighted that decisions about the acquisition, implementation, and use of facial recognition technology (FRT) by police services at the municipal, provincial, and federal levels are singular, unregulated/self-regulated, and covert (CCLA, 2020). Providing police services with this type of access to FRT data comes with a long list of privacy and discrimination concerns and risks (e.g. the targeting of racialized communities, inaccuracy and bias, and the expansive scope of information collected for the reference database). Without proper oversight, regulation, and collaborative efforts to identify and address these legal and ethical issues, FRT can quickly become a tool that reinforces harmful power dynamics and social inequalities, with detrimental impacts on public trust and acceptance of police (Bradford et al., 2020; Bromberg et al., 2020). As such, several civil liberties organizations, privacy commissioners, and city councils (e.g. the Office of the Privacy Commissioner of Canada [OPC], the European Data Protection Board [EDPB], and Boston City Council) have formally requested a de facto moratorium on FRT use by police services until the impacts of such technology can be assessed and proper guidance frameworks can be provided (OPC, 2020). Further, private technology corporations such as Axon and IBM have agreed to pause the development and distribution of FRT to law enforcement organizations across North America (Crawford, 2019; Goodfield, 2020; Lunter, 2020).
Criticisms of decisions to adopt digital technologies such as FRT have been raised throughout their rapid adoption and implementation by police, including the weighing of privacy against security, limited empirical evidence of increased investigative effectiveness, low accuracy rates, limited oversight frameworks, potential misuse of the technology, and lack of acknowledgment of the sociopolitical implications (Gates, 2002, 2006; Introna and Wood, 2004; Lum et al., 2017). More recently, concerns have been raised about the presumption that FRT is objective and about the algorithmic biases commonly associated with its development (e.g. under-representation of racialized groups in training sets, the codification and reproduction of normative conceptions of the body) (Hood, 2020; Kotsoglou and Oswald, 2020). Despite these long-standing concerns, it is only recently that the urgency of developing appropriate policies and regulation has come into public awareness.
Although similar sociopolitical concerns exist for other forms of technology in policing (Merola and Lum, 2012), in this article we use FRT as an example to illustrate a process for approaching meaningful public input in police services' technological decision-making. More specifically, we argue that meaningful public input can be accomplished in part by developing an oversight framework that genuinely incorporates public voices. As others have noted, public acceptance of digital technologies and trust in the police will remain strained (e.g. over issues of privacy and civil rights concerns) until proper regulatory practices exist for the collection, storage, management, and use of data (Crawford, 2019; Merola and Lum, 2012). In what follows, we first discuss the available literature on FRT uses as well as the current controversies surrounding the use of FRT by police in a range of countries. Next, we examine possible avenues for reducing risks associated with FRT by increasing public involvement in shaping police technology use policies and practices. Finally, we conclude by articulating the urgent need for more research on police use of FRT and other data-collecting technologies that increasingly situate people and their movements, associations, and biometrics as data points in a "surveillant assemblage" (Haggerty and Ericson, 2000).

Controversies surrounding police use of FRT
The algorithmic technology associated with biometrics has become so complex that it is often beyond the understanding of social scientists, lay people, and potentially the engineers themselves, who often fail to apply a critical lens to their work (Marciano, 2019). However, FRT in the most basic sense numerically quantifies facial features into measurements that can be compared with a searchable database of such measurements to suggest or verify an identity (Galterio et al., 2018; Gates, 2006; Mann and Smith, 2017). Although using photographs and biometrics (e.g. fingerprinting) to identify people is not new (Hood, 2020), FRT augments what can be accomplished by manual comparison when aiming to identify suspects.
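To make the basic mechanism concrete, the one-to-many comparison described above can be sketched as a toy program. This is purely illustrative: real FRT systems derive high-dimensional feature vectors ("faceprints") from images using deep neural networks, whereas the vectors, identities, and similarity measure here are hypothetical stand-ins.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank_candidates(probe, gallery):
    """Compare a probe faceprint against every gallery entry and return
    candidate identities ranked by similarity score, highest first."""
    scores = [(identity, cosine_similarity(probe, vec))
              for identity, vec in gallery.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy gallery of 3-dimensional "faceprints" (hypothetical values).
gallery = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
    "person_c": [0.4, 0.4, 0.7],
}
probe = [0.85, 0.15, 0.35]  # faceprint extracted from an investigative image

ranked = rank_candidates(probe, gallery)
print(ranked[0][0])  # "person_a" is the closest faceprint in this toy gallery
```

Note that the system returns a ranked list of candidates rather than a single definitive match, which is the point taken up in the sections that follow.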
The uses of FRT that have come under the most scrutiny and criticism are those that attempt suspect identification through processes that gather or access large numbers of faceprints against which to compare an image of a known or unidentified suspect (e.g. from an existing government database, or via real-time video surveillance) (Garvie et al., 2016; Klum et al., 2014; Norval and Prasopoulou, 2017; Slane, 2021). The range of public surveillance opportunities expands when one includes images from body-worn cameras (Bowling and Iyer, 2019; Ringrose, 2019), drones (Bradford et al., 2020), and other surveillance cameras [e.g. closed-circuit television (CCTV)] (Dessimoz and Champod, 2016) and combines these with predictive policing technologies (i.e. big data) (Brayne, 2017; Ferguson, 2017; Joh, 2015) and other types of data that track people's activities (e.g. smartphones) (Slobogin, 2017).
The potential benefits of this more expanded one-to-many FRT usage include the ability to locate people of interest to police (e.g. terrorists) more easily and in real time, which could enhance public safety (Carter, 2018; Galterio et al., 2018; Nesterova, 2020). FRT can also be used to find or identify missing or trafficked people (Carter, 2018; Galterio et al., 2018) and sexually exploited children (Russell, 2020), and to improve the efficiency of investigations (Carter, 2018; Hamann and Smith, 2019; Klum et al., 2014). Carter (2018: 49) suggests that FRT can "act … as a force multiplier" to extend the reach of limited police resources. The use of FRT also promises to make suspect identification more accurate (Kotsoglou and Oswald, 2020) and, if combined with other data-intensive investigative tools, could help curb problematic uses of police discretion (Joh, 2015).

Concerns and potential risks of FRT use by police
Voices raising caution about the use of FRT by police have become increasingly loud, as civil liberties groups, scholars, and members of the public bring forward a host of ethical and legal concerns and risks. These issues arise in many areas, from initial technological design through to post-implementation of FRT. With the increasing demand for information and the rapid adoption of biometric technology, the most pressing human rights-related risks and concerns associated with FRT include accuracy rates and discrimination/bias; violations of privacy and data (mis)use; and lack of oversight and limited transparency. The following sections outline each of these key areas in detail as they pertain to how the police use FRT.

Accuracy rates and discrimination/bias
The accuracy rates of FRT have been heavily criticized by scholars, civil liberties organizations, media outlets, and the public. According to Garvie et al. (2016), despite increased interest in FRT, the technology is less accurate than other available biometrics (e.g. fingerprinting), specifically when used in real time. As it stands, the accuracy of FRT relies on several interconnected and complex technological factors, including photo quality, lighting, proper thresholds to minimize false positives (FP) and false negatives (FN), camera position, training sets, and physical qualities of the individual (e.g. race, glasses, makeup) (Arigbabu et al., 2015; Dessimoz and Champod, 2016; Hood, 2020). Issues with any of these factors, or several combined, can lead to inaccuracies in FRT results. For instance, the Foto-Fahndung project found that daylight had a significant impact on accuracy rates, so much so that accurate identifications rose to approximately 70% during the day and fell to approximately 10% at night (BKA, 2007). However, it should be noted that FRT does not determine whether the person in the investigative image is a definitive match to an image in the source database, but instead provides a ranked list of potential matches. Police officers, and/or other specialists, are left to set the threshold for what counts as a similarity match worthy of further investigation (Dessimoz and Champod, 2016; Hood, 2020; Kotsoglou and Oswald, 2020).
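The threshold-setting step described above can be illustrated with a minimal sketch. The candidate identities and similarity scores below are invented for illustration; the point is only that where the threshold is set trades false positives against false negatives.

```python
def matches_above_threshold(scores, threshold):
    """Filter ranked (identity, similarity_score) candidates by a threshold.
    A higher threshold yields fewer false positives but risks missing true
    matches (more false negatives); a lower threshold does the reverse."""
    return [(identity, s) for identity, s in scores if s >= threshold]

# Hypothetical ranked candidate list returned by an FRT system.
scores = [("candidate_a", 0.92), ("candidate_b", 0.71), ("candidate_c", 0.40)]

print(len(matches_above_threshold(scores, 0.9)))  # strict threshold: 1 candidate
print(len(matches_above_threshold(scores, 0.5)))  # lenient threshold: 2 candidates
```

The choice of threshold is thus a policy decision as much as a technical one, which is why the discretion left to police officers and specialists matters.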
Accuracy rates of FRT are significantly impacted by algorithmic design and the composition of training sets, which can lead to lower accuracy rates for some racial groups. The National Institute of Standards and Technology, for instance, analyzed FRT data sets collected from border services and law enforcement agencies across 24 countries using the Face Recognition Vendor Test (Grother et al., 2019). Results demonstrated that FP rates were highest for West and East African and East Asian individuals, and lowest for individuals of Eastern European descent. Further, FN rates were higher for Asian and American Indian individuals, whereas African American and Eastern European individuals had the lowest FN rates. If FRT systems are designed to have better recognition rates for certain groups over others, this may lead to disproportionate scrutiny of innocent persons by police, as well as possible racial discrimination in police decision-making. For example, in Detroit, Michigan, police falsely arrested Robert Williams, a Black man with no history of involvement with police, based on an inaccurate FRT match to a driver's license database that was in turn incorrectly confirmed by a witness who had viewed only surveillance video of the crime. Although the charges were eventually dropped, the example demonstrates how an over-reliance on FRT can lead to injustice (Allyn, 2020). By contrast, a disproportionate number of FNs may lead to a false sense that the system is more effective than it is. Thus, accuracy is highly dependent on the quality of the data being used by the automated technology (Nesterova, 2020).
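An audit of the kind these findings imply, in which FP and FN rates are computed separately for each demographic group, can be sketched as follows. The group labels and outcome records are hypothetical; a real audit would use ground-truth match labels of the kind compiled for vendor tests such as the one described above.

```python
def error_rates_by_group(records):
    """Compute false-positive and false-negative rates per demographic group
    from (group, actual_match, predicted_match) records."""
    stats = {}
    for group, actual, predicted in records:
        s = stats.setdefault(group, {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
        if actual:                      # a genuine match existed
            s["pos"] += 1
            if not predicted:
                s["fn"] += 1            # system missed a true match
        else:                           # no genuine match existed
            s["neg"] += 1
            if predicted:
                s["fp"] += 1            # system flagged an innocent person
    return {g: {"fpr": s["fp"] / s["neg"] if s["neg"] else 0.0,
                "fnr": s["fn"] / s["pos"] if s["pos"] else 0.0}
            for g, s in stats.items()}

# Hypothetical audit records: (group, truly a match?, system said match?)
records = [
    ("group_x", False, True), ("group_x", False, False),
    ("group_x", True, True),  ("group_x", True, True),
    ("group_y", False, False), ("group_y", False, False),
    ("group_y", True, False),  ("group_y", True, True),
]
rates = error_rates_by_group(records)
print(rates["group_x"]["fpr"], rates["group_y"]["fnr"])  # 0.5 0.5
```

Even this toy audit shows how the same system can err differently across groups (here, more FPs for one group and more FNs for the other), which is precisely the disparity pattern the NIST analysis reported.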
More generally, algorithms appear to be biased regardless of the technological context (Calvo et al., 2020; Hood, 2020; Marciano, 2019). For instance, it has been suggested that design decisions of technology companies may lead to inadvertent algorithmic biases (Garvie et al., 2016). Unlike traditional methods of surveillance, the inclusion of these algorithms in surveillance strategies allows for the bodily sorting of groups on a presumptive basis. This presumptive sorting of bodies based on biased algorithmic determinations of dangerousness, especially in a policing context, can result in several sociopolitical dangers (Hood, 2020; Marciano, 2019). This raises increasing concerns about how FRT intersects with other data-collecting technologies (e.g. body-worn cameras/video, drones). Yet despite these concerns, the work of police officers continues to be automated and replaced with various technologies (Bowling and Iyer, 2019; Hartzog et al., 2015). Although claims of efficiency, effectiveness, and reduced bias are often used to justify the move toward more automated, technologically driven forms of policing, the contextual parameters of crime are lost when decisions are automated and human discretion is removed (Hartzog et al., 2015; Kotsoglou and Oswald, 2020). As Joh (2015: 32) notes, in our attempts to use technology to "eliminate some bad uses of police discretion (such as racial bias), it has the potential to dampen the power of good police discretion" such as giving a "person a break on behavior which would otherwise warrant a summons, a citation, or an arrest". Police discretion can aid in building community-police relations and help contextualize people's behavior within local norms and culture, which is something that an over-reliance on technologically automated policing would seem to miss. The risk is that police end up simply encoding and reproducing already existing biases and discriminatory and racist practices by relying too heavily on automated policing systems such as FRT (Hood, 2020).

Violations of privacy and data (mis)use
FRT has been categorized as a form of "silent technology" (Introna and Wood, 2004). Two of the defining features of a silent technology are hidden/embedded implementation and passive operation. The hidden/embedded implementation of FRT describes its ability to be intertwined with existing digital technologies such as body-worn cameras or CCTV. Furthermore, FRT operates passively because it does not require the consent, or even the knowledge, of the data subjects. Introna and Wood (2004) argue that the very designs of silent technologies are inherently political because they include certain interests while excluding others, for instance by way of the composition of the training sets used by particular FRTs. The "silent" nature of FRT therefore poses several risks to privacy and raises questions about how the data collected could be (mis)used by the police (Nesterova, 2020).
Even where FRT is overtly deployed, it raises concerns in comparison with other biometric technologies because it can connect a person's face to his or her identity in public settings, where people have few options to avoid surveillance (de Andrade et al., 2013). The level of intrusiveness depends on a range of factors related to how the technology is (mis)used (Mann and Smith, 2017). For example, there is often less concern about privacy when FRT is used for a limited purpose such as confirming a person's identity (e.g. at a border crossing) (Slane, 2021). FRT becomes increasingly problematic the more it is used for suspect identification in public settings or when the face database is gleaned from public sources. As the England and Wales Court of Appeal (Civil Division) ruled in a case challenging police use of FRT, the ongoing live capture and scanning of faces on public streets infringes privacy and equality rights under European Union law (Bridges v. SWP, 2020). Similarly, when FRT compares a suspect's image with a large database of existing images, whether that database is culled from online sources (e.g. Clearview AI) or government photos (e.g. driver's licenses), the privacy interests of individuals are implicated when their image ends up in a police database without their consent (Mann and Smith, 2017; OPC, 2021; Slane, 2021). Privacy concerns also emerge when FRT is used to track people across public cameras and hence through public spaces (Mann and Smith, 2017; Ringrose, 2019). The risks of FRT are amplified when we consider these privacy concerns in conjunction with the previously discussed bias/discrimination experienced by marginalized groups (Hood, 2020).

Lack of oversight and limited transparency
Some of the risks discussed above pertain to documented, existing problems that need to be resolved, whereas others remain potential. Regardless, these risks often arise because of a lack of oversight regarding FRT procurement and use and/or limited transparency by police. Frequently, it is unclear how police are using FRT, what happens to the data, and whether the data are secure (Carter, 2018; Ringrose, 2019). In the United States, it is estimated that approximately 117 million people, roughly half the adult population, are included in databases that the police can use with FRT, and this number will likely only increase if left unchecked (Garvie et al., 2016). More broadly, the sheer volume of information about people's lives that is now stored in digitized form raises important concerns about how police access, use, store, and search this information (Fan, 2018; Mann and Smith, 2017; Slobogin, 2017).
Despite claims of FRT being automated, there is still much discretion and human oversight throughout its design and implementation (Dessimoz and Champod, 2016; Fussey et al., 2021; Kotsoglou and Oswald, 2020). As such, properly framing and limiting appropriate uses of FRT, and establishing procedures for preventing over-reliance and mitigating bias, are key (Nesterova, 2020). Few guidelines have been provided to, or developed by, police for how to implement FRT in a rights-protective manner, leading to the above-noted controversies. In particular, a privacy-protective framework needs to be developed that can balance public safety with protecting people's data and privacy (Nesterova, 2020; Slane, 2021). In addition, although the specifics vary by location, a host of legal questions need to be addressed, including when a warrant is required and what criteria need to be met (Ferguson, 2021; Slane, 2021), and how to ensure due process when people's facial images are digitally ubiquitous (Introna and Wood, 2004). As Galterio et al. (2018: 10) argue, "the growth of this technology is completely dependent on the security measures in place to ensure that privacy is protected and accuracy is exact".
Yet, until the recent upswing in public controversy over police use of FRT, there was little evidence that FRT use by police services was being scrutinized. For example, Garvie et al. (2016) found little evidence that FRT systems used by police in the United States had ever been audited and thus it was unclear what (mis)use had taken place. More recently, in Canada, the OPC investigated the Royal Canadian Mounted Police's (RCMP) use of Clearview AI and found that the RCMP had violated Canada's privacy laws because "billions of people essentially found themselves in a '24/7' police line-up" (OPC, 2021). Other countries such as Australia, Germany, and the United Kingdom (UK), are scrutinizing FRT more closely to determine what sorts of oversight mechanisms (e.g. audits) might be required (Mann and Smith, 2017). Further, the European Parliament is moving toward banning FRT use in public spaces (European Parliament, 2021).
As discussed above, there has been much focus on FRT's inaccuracy and dubious effectiveness, raising the question of what thresholds or policies are, or should be, in place to determine what counts as "accurate" or "effective" enough to justify use of the technology. Similarly, as Fussey et al. (2021: 332) note, the preoccupation "with outcomes and 'if it works' … fails to define what appropriate identifications of success should be (i.e. the number of convictions, arrests or accurate identifications or minimizing the volume of inaccurate 'matches')". In addition to disclosing accuracy rates and levels of effectiveness when cases proceed to court, it must also be made apparent how FRT was used to identify the suspect and that FRT is considerably less accurate than other more familiar biometric technologies (e.g. DNA, fingerprints) (Dessimoz and Champod, 2016; Kotsoglou and Oswald, 2020).
Relatedly, transparency in how police are using FRT is lacking. For example, Fussey et al. (2021) criticized the reference database used to compare real-time surveillance images in the UK as overly discretionary and vague, because it included not only people wanted for crimes, but also missing people and "persons of interest". Thus, there were no clear and transparent criteria determining who was included in this database. The composition of the database directly links to the accuracy and effectiveness of FRT, the privacy of persons not under investigation, and the bias and discrimination experienced by marginalized groups. If the database is assembled dragnet style, for instance by indiscriminately scraping digital images from the internet, claims that privacy and data are protected ring hollow (de Andrade et al., 2013; Rezende, 2020). Overall, the lack of oversight and transparency on police use of FRT has hampered efforts to understand avenues in which FRT might be legitimately used (Hood, 2020).

Incorporating public voices into police FRT usage
Given the potential benefits and the range of risks and concerns identified above, how FRT should be regulated and what policies need to be put in place remain open questions. Some cities (e.g. San Francisco, CA; Portland, OR) have responded to police FRT use by banning the technology's use in public spaces. Companies such as Amazon, IBM, and Microsoft have placed moratoriums on selling FRT to police services (Crawford, 2019). Legislation and by-laws have been proposed and, in some cases, enacted, and guidelines for using and keeping track of police use of FRT are developing and at various stages of implementation (Feiner and Palmer, 2021; OPC, 2021). What is currently missing from all of these approaches is attention to how the public understands FRT (and related technologies) as used by police and, more broadly, to how public input can genuinely be incorporated into policies for controversial technologies. In this section, we demonstrate the importance of the public having a voice in how police acquire and use investigative technologies that impact their rights. In doing so, we delineate the broad contours of the processes necessary to genuinely incorporate public voices into the development of FRT policies. Therefore, in what follows, we are more interested in articulating how FRT policies could be co-constructed with the public than in delineating what a policy should look like, which would inevitably vary by locale and legal context.
In recent decades, there has been an increasing interest in genuinely engaging the public in policy-making. For example, the desire to increase public engagement is seen across a range of topics including health (Boswell et al., 2015; Meetoo, 2013), smart cities (Boukhris et al., 2016; Cardullo and Kitchin, 2019), policing (Mangan et al., 2018), and robotics (Wilkinson et al., 2011). There is an acknowledgment here that the public have important insights that need to be understood and incorporated into policy-making, even if topics are controversial and complex.
This attempt to engage the public is largely due to the perceived apathy of the populace toward participating in decision-making (e.g. voting, attending public meetings) on issues impacting their lives (Epstein et al., 2014; Hendriks and Kay, 2019; MacMillan, 2010; Thacher, 2001). However, public indifference to policy-making can be attributed, in part, to the methods that historically have been used by policy-makers to engage the public (Thacher, 2001; Woodford and Preston, 2013). Methods billed as "public consultation" have often entailed no meaningful dialogue between the public and policy-makers, a lack of representativeness, and an inability to impact policy outcomes that were already predetermined (Cook, 2002; Rowe and Frewer, 2005; Woodford and Preston, 2013). In essence, public consultation has often been tokenistic (Cook, 2002).
For the purposes of this article, we use the term "public engagement" instead of "public consultation" in an attempt to move the literature beyond consulting the public and toward genuinely having the public participate in policy-making. More specifically, by public engagement we mean having the public actively co-construct policy through a two-way dialogue with policy-makers on decisions that impact their lives, which in this article refers to the public having a genuine voice in how FRT is used by police. However, it should be noted that public engagement is a contested term in the literature (Dean, 2017), and we use it to encompass similar terms such as co-design or participatory design (Whicher and Crick, 2019), public participation (Meetoo, 2013), co-production (Glaser et al., 2006), co-creation or citizen-centric (Cardullo and Kitchin, 2019), active citizenship or inclusion (Bartoletti and Faccioli, 2016), and collaboration (Mangan et al., 2018). Despite differences in terminology, the overarching consensus is that public engagement in policy-making should be deliberative, iterative, dynamic, flexible, impactful, and not predetermined (Horlick-Jones et al., 2007; MacMillan, 2010; Mangan et al., 2018; Woodford and Preston, 2013).
The benefits of a public engagement approach over public consultation include the possibility of improving democratic functioning (Fung, 2006; Thacher, 2001), encouraging a greater range of perspectives in policy-making (Horlick-Jones et al., 2007; Meetoo, 2013), and enhancing policy frameworks and understandings (Hendriks and Kay, 2019). Yet, obtaining these benefits remains challenging, and a variety of mechanisms have been deployed to help accomplish public engagement. These mechanisms include participatory budgeting (Lukensmeyer, 2017), citizen panels (Voß and Amelung, 2016), citizen juries (Boswell et al., 2015), citizen assemblies (Devaney et al., 2020), and a range of other variations. In part, the range of public engagement terminologies and mechanisms has led to much experimentation in ways to engage the public but little agreement on how best to accomplish effective, genuine public engagement (Dean, 2019; Glaser et al., 2006; PytlikZillig and Tomkins, 2011; Rowe and Frewer, 2005). In what follows, we examine some of these public engagement mechanisms and focus on the most promising elements for effectively engaging the public in developing policies for police use of FRT.

Education, expertise, and transparency
For FRT to be effectively and acceptably used, public sentiment on the topic should be included as a tool to inform the process of police decision-making (Steinacker et al., 2020). Conducting public surveys is a common method for determining public perceptions of FRT. For example, a survey of Canadians found that approximately 69% felt that the police should be able to use FRT software, and 48% believed that some privacy was worth losing if the use of FRT significantly reduced crime (i.e. by 5%) (Cybersecure Policy Exchange, 2021). Similarly, Bromberg et al. (2020) found that in the United States, most people surveyed supported police use of FRT, but this differed by demographics and political orientation. In addition, Bradford et al.'s (2020) UK-based study found that individuals who placed trust and legitimacy in the police were less likely to have privacy concerns about police use of FRT. Although researchers have only begun to understand what influences people's perceptions of FRT, levels of support for FRT use by police tend to vary across countries and communities (Kostka et al., 2021).
However, it is not always clear in these studies whether the public being surveyed have a clear grasp of what FRT entails, especially in the hands of police. As Bradford et al. (2020) note, given that people know little about, or have few experiences with, FRT, they make sense of it by extrapolating from existing knowledge and more familiar topics (e.g. perceived trust and legitimacy in police more generally). Thus, using surveys to capture people's understandings of disruptive technologies is useful for determining the lay public's raw perspective, but it does not capture the level of informed opinion needed to develop effective and acceptable public policy. Given this, the public engagement literature suggests that effective public participation in policy-making requires an educational component that allows participants to have a similar information base from which to discuss the policy under consideration (Dean, 2017; Fung, 2006; Hindmarsh, 2010; Muradova et al., 2020). Thus, before the public can genuinely contribute to policy-making, they need access to independent expert knowledge on FRT and should be provided with neutral information on its uses, risks, and benefits, as well as the range of procedural and governance mechanisms available to oversee its appropriate use.
The multifaceted and rapid pace of social and technological change makes it inevitable that expert knowledge is needed to understand contemporary issues. Expertise is often critical in assessing the risks involved in technological advances (Beck, 2009; Giddens, 1991). However, the public needs to trust that the information they are receiving from experts is impartial, or at a minimum articulates the best available (e.g. accurate, timely, carefully considered) knowledge on a topic (Beck, 1992; Giddens, 1990). The complexity of FRT makes it unlikely that the public have fully informed, well-thought-out pre-existing perspectives on it. Thus, the best source of knowledge from which the public can form the necessary information base is likely to be experts. In particular, experts can help the public distinguish FRT from related technologies (e.g. big data, predictive policing) and contextualize FRT use within policing more generally.
The second key element for the public to genuinely participate in policy-making is ensuring that neutral and accurate information is publicly available regarding current and potential police use of FRT specific to their jurisdiction. The police are notoriously secretive organizations, and sometimes it is even difficult to confirm that a technology is being used by a police service at all (Introna and Wood, 2004; Joh, 2015). Thus, while experts can provide insight into various aspects of FRT usage by police more generally and in a (mostly) neutral manner, the police need to be much more forthcoming regarding their specific use of FRT and other related surveillance technologies.
In order for the public to contribute genuinely to FRT policy-making, they would need information on the acquisition process for new investigative technologies (e.g. cost, third-party companies involved, bidding process, other technologies/approaches considered); means of auditing data collection, use, and secure data storage; effectiveness (e.g. accuracy of FRT, empirical evidence); in what circumstances and with what authorization FRT can be used (e.g. when is a warrant required and on what standard); and ongoing oversight plans for FRT usage (e.g. planned independent audits, public reporting of results, ongoing research on effectiveness). Without access to this type of information, it would be difficult for the public to develop an informed opinion on an appropriate means to govern FRT usage by the police.

Dialogue and co-creating policies
Whereas the previous section focused on information acquisition and availability as a passive process (i.e. information is provided and simply acquired by a recipient), in this section we discuss the co-creation of knowledge as an active process. For the public to make use of the information about FRT provided to them and to reconcile this information with their previous experiences and knowledge, it is important that the public engage with this information in a more active manner. Regarding FRT, opportunities for dialogue should be developed not only between community members and experts, police, and those attempting to develop the policy, but also between and among members of diverse communities (Ansell and Gash, 2007; Cook, 2002; Fung, 2006; Hindmarsh, 2010; Meetoo, 2013; Rowe and Watermeyer, 2018; Woodford and Preston, 2013).
Combining educational components, expert knowledge, dialogue, and public input is theorized to produce a deliberative democratic approach that enhances mutual learning among those involved in the policy-making process (i.e. policy designers, stakeholders, and communities), helps clarify ambiguities in policy, and advances workable solutions to complex policy problems (Fung, 2006; Lukensmeyer, 2017; Muradova et al., 2020). The group brought together to dialogue must be a representative sample of the public and must include those directly impacted by the policy. When a diverse set of people, including those with opposing views, is brought together to discuss issues, there is an opportunity for productive dialogue and collective problem-solving (Ansell and Gash, 2007).
Although there are multiple ways to practically accomplish co-creation of public policy that incorporates the principles discussed above, one of the more promising approaches has been citizen assemblies. When operating optimally, citizen assemblies combine the key elements discussed above and appear to genuinely incorporate public voices into policy-making. For example, Ireland's Citizens' Assembly comprised a randomly selected and representative sample of Ireland's population, brought together to deliberate some of the most pressing and complex issues facing the country (e.g. climate change, access to abortion). Ninety-nine citizens participated in the process, which entailed regular meetings over a series of months. Key aspects of the meetings included listening to information presented by experts and by those with personal experience of a particular topic, dialoguing on the issue as a group, and voting on policy recommendations to establish consensus. The general public could also participate through submissions to the Citizens' Assembly. On the divisive issue of abortion, 64% of the Assembly voted in favor of legalization, and its recommendations prompted the government to call a national referendum, in which 66% of voters supported legalizing abortion. Overall, this deliberative democratic approach incorporated a representative group of citizen voices into the direction of policy-making and helped make progress on issues that the political system alone had failed to address (Citizens' Assembly, 2021; Devaney et al., 2020).
These types of approaches, whether citizen assemblies or some other form of collaborative learning design used to co-create public policy, would be an important model for developing FRT policies, especially given the controversial nature of the technology. Although the police have readily embraced public consultation through community policing, they have been reluctant to adopt a public engagement approach that co-constructs policy with the public. Indeed, segments of the public that hold negative views of the police are often ignored in police consultations (Cook, 2002). We propose that a process of meaningful public engagement that genuinely incorporates public voices in a democratic fashion can improve public trust. We are not suggesting that the process will generate a consensus: some communities may decline to participate, and some will maintain their predisposed views. However, the process remains vital to improving trust in police regardless of the outcome.
Overall, incorporating collaborative learning and engagement into public/police dialogue is a promising avenue for instilling transparency and openness. For instance, when examining public commentary on police use of FRT, Bragias et al. (2021) found that support for FRT would likely increase if police were more transparent and worked to educate the public on their properly limited and justified uses of FRT. These types of techniques (e.g. openness to critique and dialogue, willingness to educate the public, incorporating public feedback into policy design) go a long way toward enhancing trust and legitimacy, especially in a time of increasing public criticism of police tactics (Bradford et al., 2020; Bragias et al., 2021).

Future research
Co-creating policies on FRT with the public can improve our understanding of how to engage the public in devising policies to govern police use of data-collecting technologies. Questions about FRT use often apply to other investigative technologies as well, such as how, when, and where the technology should be used by police or other powerful entities (e.g. corporations). For example, under what circumstances do members of the public feel that police surveillance crosses a line into unacceptable intrusion? What level of effectiveness in reducing or solving crime is required before the public accepts police use of an investigative technology like FRT? In addition, what trade-offs (e.g. privacy, risk of false arrests) do they deem acceptable in exchange for this effectiveness (e.g. reduced crime)? How do members of the public think about FRT in relation to other surveillance and data-informed technologies (e.g. police use of big data)? What is it about FRT specifically that has recently sparked so much concern, when police regularly have access to other forms of data that are similarly compromising (e.g. social media, CCTV, license plate readers)? Finally, what are the best avenues for engaging the public on decisions that affect them? We suggest that deploying a range of methodological approaches that co-construct policy with the public can help identify effective techniques for public engagement and help advance theory in this area, which is currently lacking (PytlikZillig and Tomkins, 2011).

Conclusion
Obtaining public perspectives on policy issues identifies a wider range of concerns and benefits than experts alone would envision (Horlick-Jones et al., 2007; Powell and Colin, 2009). However, as we have argued in this article, in order for the public to genuinely participate in public policy-making, they need a sufficient information base from which to form their opinions, and they must be able to dialogue and discuss these ideas with others. Although the move toward evidence-based policing has added a much-needed focus on what works (i.e. effective policing techniques) (Manning et al., 2013), the question of whether FRT is effective at improving the efficiency of investigations and solving cases is moot unless the governance of FRT use is broadly deemed legitimate by the public, including communities with endemic distrust of police. Beyond FRT, the approach outlined in this article could provide a model for examining other controversial technological innovations and their uses (e.g. artificial intelligence, big data, predictive policing). The public have concerns about the pace and scope of technological change and innovation in policing (Wilkinson et al., 2011), and effective public engagement could help address, if not quell, some of these concerns.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.