IFed: A novel federated learning framework for local differential privacy in Power Internet of Things

Nowadays, wireless sensor network technology is increasingly popular and is applied to a wide range of Internet of Things (IoT) systems. In particular, the Power Internet of Things is an important and rapidly growing part of IoT systems, which benefits from the application of wireless sensor networks to achieve fine-grained information collection. Meanwhile, the privacy risk is gradually being exposed, which is a widespread concern for electricity consumers. Non-intrusive load monitoring, in particular, is a technique that recovers the state of appliances from energy consumption data alone, enabling an adversary to infer the behavior privacy of residents. There can be no doubt that applying local differential privacy in the local setting is more trustworthy for electricity customers than a centralized approach. Although it is hard to control the risk and achieve the trade-off between privacy and utility with traditional local differential privacy obfuscation mechanisms, some existing obfuscation mechanisms based on artificial intelligence, called advanced obfuscation mechanisms, can achieve it. However, the large computing resource consumption required to train the machine learning model is not affordable for most Power Internet of Things terminals. In this article, to solve this problem, we propose IFed, a novel federated learning framework that lets the electricity provider, who is normally adequately provisioned with computing resources, help Power Internet of Things users. First, we propose an optimized framework that incorporates the trade-off between local differential privacy, data utility, and resource consumption. Concurrently, we identify and resolve the privacy-preserving problem that arises when machine learning models are transported between the electricity provider and customers. Last, users are categorized based on different levels of privacy requirements, and a stronger privacy guarantee is provided for sensitive users.
The formal local differential privacy analysis and the experiments demonstrate that IFed can fulfill the privacy requirements of Power Internet of Things users.

Introduction

The Power IoT relies on smart meters and sensors for fine-grained information collection. Wireless sensor networks (WSNs) are a promising option for Power IoT systems, as they provide low cost and large geographic coverage. 4 As shown in Figure 1, besides the strong electric power transmission channel, in home area networks (HANs) and neighbor area networks (NANs), the power grid system needs a broad-coverage data transmission channel for which WSNs are applicable.
IoT technologies have the capability to collect, quantify, and understand the surrounding environment, which brings many benefits to IoT users. However, the extensive collection and processing of users' data from IoT devices, such as smart meters, also bring privacy concerns. 5,6 As IoT end-devices can be deeply involved with users' private data, the data they generate will contain privacy-sensitive information. 7,8,9 Data collected by IoT devices may leak and threaten smart grid users' behavior privacy. For example, by applying non-intrusive load monitoring (NILM) 10 techniques, an adversary can infer users' private activities from power consumption data. As Figure 2 shows, it is possible to infer when the fan heater, stove burner, or other electric appliances are in use; 6 from this, the detailed behavior of the residents may be inferred, and continual observation might even reveal the identity or personal privacy of the residents. We will refer to all of these, roughly, as behavior privacy, 11 which is the main privacy notion addressed in this article.
Differential privacy (DP), which has been used successfully in various fields, 12,13 provides a formalization of the notion of a privacy adversary and introduces a meaningful measure of privacy loss. 14 In traditional centralized DP, privacy is guaranteed by adding obfuscation to the output of a trusted data aggregator. 15,16,17 However, Power IoT networks are network systems consisting of massive numbers of smart meters, sensors, and other IoT devices, embedded widely in the physical world with weak network boundaries. 18,19 An adversary may be concealed anywhere in the smart grid user zone. Many potential attacks, such as NILM, can reveal private data before it reaches the trusted data curator. 20,21 Furthermore, electric power grid users might be sensitive military industrial enterprises, the electric power provider cannot be regarded as a trusted third party, and the transmission channels are widely uncontrollable. 22 In summary, in Power IoTs, the support of a trusted data curator is inadequate; therefore, local obfuscation is a better choice for preserving users' behavior privacy.
Local differential privacy (LDP) 23 has been used for privacy preservation in smart grids and IoTs in recent years. LDP avoids collecting the exact original power consumption information and substitutes it with data disturbed locally on the user side, thus providing a stronger assurance to users. Unfortunately, because of the robustness of NILM, traditional obfuscation mechanisms, such as randomized response, cannot reduce the accuracy of behavior inference observably. However, for accurate billing, energy consumption data are highly accuracy-sensitive, and excessive obfuscation is not acceptable to customers or the electricity provider. It is deemed hard for ordinary obfuscation mechanisms to achieve the trade-off between user behavior privacy and the utility of energy consumption data. Much research has indicated that a naive hardware implementation of an LDP mechanism cannot guarantee the behavior privacy of an individual user in Power IoT systems. 20 Some advanced obfuscation mechanisms 24-28 combined with distribution estimation or machine learning are able to achieve the trade-off; however, the consequent problem is that naive Power IoT devices cannot endure their complex and computationally costly algorithms, especially the procedure of model training with its high computing resource consumption. In summary, Power IoTs with naive hardware cannot guarantee behavior privacy completely by directly applying existing local obfuscation mechanisms. 20 To solve the behavior privacy problem in Power IoTs, we propose IFed, a novel federated learning framework for IoT. As an extension of federated learning, 16,29,30 IFed uses model transport instead of sensitive data transport for privacy preservation.
The main focus of this article is to derive insights into the trade-off between the behavior privacy of Power IoT users and the utility of energy consumption data and, more importantly, to grapple with the problem of how to make complex advanced obfuscation mechanisms suitable for the naive Power IoT user side with very low computation capacity. We aim to achieve a good trade-off between users' behavior privacy against NILM, data utility, and low computational cost at the Power IoT device. Besides, we strictly and formally define and prove the framework's privacy protection strength.
The contributions of this article are as follows. To the best of our knowledge, this is the first work to apply a federated learning framework to Power IoT systems that solves the users' behavior privacy problem against NILM technology and achieves the trade-off between privacy and utility. In particular, our solution addresses the behavior privacy not only of the energy consumption data upload but also of the subsequent model-training process, since the procedure of model uploads and downloads between users and the electricity provider is a potential behavior privacy risk. Considering the existing conditions in smart grids, where many sensitive users and regular users coexist, our solution supports different privacy requirements of users and different obfuscation mechanisms.
The remainder of this article is structured as follows. Section ''Preliminaries'' provides background on NILM, LDP, and obfuscation mechanisms. Section ''System model'' presents an overview of IFed. Section ''Key algorithms in IFed'' discusses the three key algorithms used in IFed in detail: model aggregation, horizontal federated learning (HFL), and heterogeneous federated transfer learning (HFTL). Section ''LDP analysis'' presents the formal analysis by LDP. Section ''Experiments'' describes the evaluation and experimental results. Finally, section ''Conclusion'' gives the conclusion.

Preliminaries
In this section, the adversary model, LDP, obfuscation mechanisms, and federated learning are introduced, which underpin the federated learning framework for LDP in Power IoTs.

Adversary model
Adversaries may be hidden and snooping anywhere on the wide grid, even around the resident. The threat stems from an attacker inferring users' behavior and actions with high confidence from observed energy consumption. This technology, called non-intrusive load monitoring, was first introduced by Hart. 5 In recent years, some research works have improved NILM using artificial intelligence algorithms and higher-accuracy measurements. 31 In particular, some improved NILM algorithms exhibit favorable robustness and can maintain high accuracy under obfuscation. In the case of single-load identification, NILM compares the extracted feature of an unknown load with those of known loads in the device database pool and tries to minimize the error between them to find the closest match. 32

The problem of NILM can be formulated as follows: given the sequence of aggregate power consumption 31,32 $X = X_1, X_2, \ldots, X_T$ from $N$ appliances, measured at the entry point of the meter at $t = 1, 2, \ldots, T$, the task of the NILM algorithm is to infer the power contribution $y_t^i$ of appliance $i \in \{1, 2, \ldots, N\}$ at time $t$, such that at any point in time $t$

$$X_t = \sum_{i=1}^{N} y_t^i + \sigma(t)$$

where $\sigma(t)$ represents any contribution from appliances not accounted for and measurement noise. The objective of NILM is

$$\mathrm{class} = \arg\min_i \| \hat{y}_i - y_i \|$$

where $\hat{y}_i$ is the appliance feature available in the signature library and $y_i$ is the new feature extracted due to the occurrence of an unknown event.

(Figure 2 caption: It can be inferred that the fan heater was in use from the 5th to the 30th minute, the oven from the 35th to the 55th minute, and so on. Because the kitchen appliances were used in due order, the resident was cooking by himself or herself from the 15th to the 55th minute. From similar behavior, it might even be inferred that the resident lives alone.)
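As an illustration of the nearest-match step above, the following minimal sketch classifies an unknown load event by minimizing $\|\hat{y}_i - y_i\|$ over a signature library. The library entries (appliance names, feature values) are purely illustrative assumptions, not taken from any dataset used in this article.

```python
import numpy as np

# Hypothetical signature library: per-appliance feature vectors
# [active power (W), power factor] -- values are illustrative only.
signature_library = {
    "fan_heater": np.array([1800.0, 0.95]),
    "oven":       np.array([2400.0, 0.99]),
    "fridge":     np.array([150.0, 0.80]),
}

def classify_event(y_new):
    """Return the appliance whose stored feature minimizes ||y_hat_i - y_new||."""
    return min(signature_library,
               key=lambda name: np.linalg.norm(signature_library[name] - y_new))

# An unknown switching event with a feature close to the fridge signature.
print(classify_event(np.array([160.0, 0.82])))  # fridge
```

Real NILM systems extract richer features (transients, harmonics) before this matching step, but the arg-min structure is the same.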

LDP
Traditional global differential privacy (GDP) 9 and LDP 23 are two approaches to achieving DP. Unlike GDP, LDP needs no trusted data curator, 33 which makes it more applicable to behavior privacy in Power IoTs.
Recently, an increasing number of researchers have shifted their focus to LDP in IoTs, since LDP protects each user's privacy locally without relying on a trusted third party. 34 LDP can be formulated as follows: a randomized mechanism $M$ satisfies $\varepsilon$-LDP if, for any pair of inputs $v$ and $v'$ and any output $y$,

$$P[M(v) = y] \le e^{\varepsilon} \cdot P[M(v') = y]$$

where $P$ denotes probability. Thus, it can be seen that the obfuscation mechanism plays a key role in LDP; most existing obfuscation mechanisms are random and based on randomized response, such as RAPPOR. 23 Randomized response can be formulated as follows.
Randomized response. For each client's value $v$ and bit $i$, $0 \le i \le k$ in $B$, create a binary reporting value $B'_i$ which equals

$$B'_i = \begin{cases} 1, & \text{with probability } \tfrac{1}{2}f \\ 0, & \text{with probability } \tfrac{1}{2}f \\ B_i, & \text{with probability } 1 - f \end{cases}$$

where $f$ is a parameter controlling the level of longitudinal privacy guarantee. Then, send the generated report $B'_i$ to the server.
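The randomized response above can be sketched in a few lines. The function name and interface are our own; the per-bit probabilities (f/2 for 1, f/2 for 0, 1 − f for keeping $B_i$) follow the RAPPOR-style formulation just described.

```python
import random

def permanent_rr(bits, f, rng=random.Random(0)):
    """RAPPOR-style permanent randomized response on a bit vector.
    Each bit becomes 1 with prob. f/2, 0 with prob. f/2,
    and is kept unchanged with prob. 1 - f."""
    out = []
    for b in bits:
        u = rng.random()
        if u < f / 2:
            out.append(1)
        elif u < f:
            out.append(0)
        else:
            out.append(b)
    return out

report = permanent_rr([1, 0, 0, 1, 1, 0, 1, 0], f=0.5)
print(report)  # an obfuscated 8-bit report
```

Larger f flips more bits and gives a stronger longitudinal privacy guarantee at the cost of utility; f = 0 reproduces the input exactly.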

Advanced obfuscation mechanisms
We find that some recently proposed mechanisms are no longer completely random: rather than applying the traditional method directly, they first transform the data based on the state of the target before applying LDP. This kind of mechanism is more efficient at achieving the trade-off between utility and privacy in IoTs. They can be split into several categories, which are introduced in this section; we refer to them collectively as advanced obfuscation mechanisms.
One category is based on discrete distribution estimation, 24,25 where an empirical estimation step is carried out before randomized response: the mechanism first produces $m$, the output of the empirical estimation $\hat{p}$ of the true distribution $p$. Another category is based on machine learning, such as Bayes, 35 clustering, 26 Markov models, 27 and sparse coding. 28 Usually, this kind of mechanism first runs a learning algorithm before randomized response; for example, in sparse coding, a dictionary is trained by minimizing a reconstruction objective over the training data. The key problem of the advanced obfuscation mechanisms in Power IoTs, which is also the major challenge of this article, is their great demand for computing resources. For most Power IoT terminals, typically smart meters, this demand is unrealistic.

Federated learning
The concept of federated learning 29,36 was recently proposed by Google to reduce the risk of the cloud service provider learning personal model updates. In federated learning, the model is learned by multiple clients in a decentralized fashion. 37,38 The parameters of the trained models are centralized by a trusted curator, which then distributes an aggregated model back to the clients. 30,39 In particular, the goal is typically to minimize the following objective function

$$\arg\min_{w} F(w) = \sum_{k=1}^{m} p_k F_k(w)$$

where $m$ is the number of clients, $F_k$ is the local objective function for the $k$th client, and $p_k$ specifies the relative impact of each client.
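The objective $F(w) = \sum_k p_k F_k(w)$ can be evaluated directly. In this minimal sketch, a squared loss stands in for the cross-entropy loss adopted later in the article, the weights are $p_k = n_k / n$, and the client data are toy values.

```python
import numpy as np

def global_objective(w, client_data):
    """F(w) = sum_k p_k F_k(w) with p_k = n_k / n, where F_k is the mean
    squared loss on client k's local samples (a stand-in for the paper's
    cross-entropy loss)."""
    n = sum(len(x) for x, _ in client_data)
    total = 0.0
    for x_k, y_k in client_data:
        p_k = len(x_k) / n                   # relative impact of client k
        f_k = np.mean((x_k @ w - y_k) ** 2)  # local objective F_k(w)
        total += p_k * f_k
    return total

clients = [(np.array([[1.0], [2.0]]), np.array([1.0, 2.0])),
           (np.array([[3.0]]), np.array([3.0]))]
print(global_objective(np.array([1.0]), clients))  # 0.0: w = 1 fits all clients
```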

System model
In this section, we present the design rationale of the federated framework for LDP in Power IoTs.

Categorization of IoT user
Generally speaking, the Power IoT is a wide network involving many segments such as the HAN, NAN, and wide area network (WAN), and the system structure differs considerably between regions around the world. Meanwhile, the privacy-preserving requirements and investments of different users are not the same; therefore, the privacy protection required by different users may differ.
According to their privacy requirements and investment, we categorize IoT users as sensitive users and regular users. Our aim is to provide a stronger privacy guarantee for sensitive users while making full use of their local edge computing equipment. Regular users, such as residents, usually seek good privacy without high investment. Their behavior privacy disclosure may lead to personal privacy loss. We assume that such users do not want to send the original data, which may lead to privacy disclosure, directly; however, they can accept using models from the grid and sending local models to the grid, as they may not be willing to add any devices beyond their smart meters.
Sensitive users, such as military industry enterprises, may seek absolute privacy with some acceptable investment. Their behavior privacy disclosure may lead to the disclosure of top secrets or great loss. We assume that such users trust neither other users in the grid, nor the data transmission channel, nor the grid side. Sensitive users do not accept un-obfuscated original energy consumption data being sent to others for billing or for model training; however, they accept adding some local IoT devices with weak computing resources to support privacy protection.
Due to the low computation capability of IoT devices, sensitive users have no capacity to perform complicated computational tasks such as full deep learning model training. However, they have the ability to perform some simple tasks, such as model execution for prediction in HFTL. Furthermore, regular users can only perform even simpler tasks, such as model execution for prediction in homogeneous transfer learning. 40

Overview of IFed
The aim of our framework is to let the electricity provider help users train models; however, this does not mean the electricity provider has to be regarded as a completely trusted third party, nor that users give up local privacy protection. In fact, our goal is to design a system in which the user can safely use a trained model with the aid of an incompletely trusted electricity provider and, moreover, defeat adversaries around the user. The expected result is that an adversary knows the user transfers consumption data along with a model or distribution estimation but cannot recover it, while data of high utility can still be used for billing. The detailed goals are as follows. First, achieve the trade-off between behavior privacy against NILM, data utility supporting electricity billing, and low computing resource consumption in naive Power IoT terminals. Second, keep all model transport between users and the electricity provider subject to LDP, so that it does not become a new privacy risk. Note that the proposed federated learning framework for IoTs does not intend to replace the existing advanced obfuscation mechanisms but aims to solve the problem of safely exchanging power consumption data and trained models between individuals and the grid side.
The architecture of the federated learning system in Power IoTs is shown in Figure 3. In this system, $r$ regular users with similar data structure and $u$ sensitive users with different data structures and different electric appliances coexist in the same Power IoT. To solve the problem caused by insufficient resources on the user side, the electricity provider pre-trains a global model $w_g$, undertaking the training task on behalf of users. Then, the models $w_{r_1}, w_{r_2}, \ldots, w_{r_t}$ and $w_{u_1}, w_{u_2}, \ldots, w_{u_n}$ are fine-tuned with transfer learning on locally cached historical data as new local models, and the power consumption data are obfuscated. To maintain stronger applicability and generality of the global model, regular users upload their local models with LDP to the electricity provider; by contrast, sensitive users do not participate in this, for a more absolute privacy guarantee. Then, the electricity provider aggregates the models $w_{r_1}, w_{r_2}, \ldots, w_{r_t}$ into a new global model $w_{g_1}$. Sensitive users need federated transfer learning technology to adapt the global model to their local models. The detailed process of such a system usually contains the following five steps:
Step 1. The power provider sends the current pre-trained global model $w_{g_0}$ to all sensitive users $u$ and regular users $r$.
Step 2. Regular users $r_1, r_2, \ldots, r_t$ update local models $w_{r_1}, w_{r_2}, \ldots, w_{r_t}$ with the global model $w_{g_0}$ and their individual locally cached data by HFL (see section ''HFL for regular users'').
Step 3. Sensitive users $u_1, u_2, \ldots, u_n$ update local models $w_{u_1}, w_{u_2}, \ldots, w_{u_n}$ with the global model $w_{g_0}$ and their individual locally cached data by HFTL (see section ''HFTL for sensitive users'').
Step 4. Only regular users $r_1, r_2, \ldots, r_t$ generate new obfuscated local models $w'_{r_1}, w'_{r_2}, \ldots, w'_{r_t}$ and then upload them to the power provider.
Step 5. The power provider selects some regular users as active users $a_1, a_2, \ldots, a_t$ and generates a new global model by model aggregation (see section ''Model aggregation for provider'').
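The five steps above can be sketched as one round of the protocol. This is a rough sketch only: all function names are our own, the local update is a single gradient step on a toy least-squares task rather than the HFL/HFTL procedures detailed later, and the aggregation is a plain average of the selected uploads rather than the weighted scheme of section ''Model aggregation for provider''.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w_global, data, lr=0.1):
    """One local gradient step (stand-in for the HFL/HFTL fine-tuning)."""
    x, y = data
    grad = 2 * x.T @ (x @ w_global - y) / len(y)
    return w_global - lr * grad

def obfuscate(w, sigma=0.01):
    """LDP obfuscation of the local model before upload (Step 4)."""
    return w + rng.normal(0.0, sigma, size=w.shape)

def ifed_round(w_global, regular_data, c_fraction=0.5):
    # Steps 1-2: broadcast w_g; regular users fine-tune locally.
    local_models = [local_update(w_global, d) for d in regular_data]
    # Step 4: regular users obfuscate and upload their models.
    uploads = [obfuscate(w) for w in local_models]
    # Step 5: provider activates a C-fraction and aggregates a new w_g.
    k = max(1, int(c_fraction * len(uploads)))
    return sum(uploads[:k]) / k

# Four toy regular users, each holding the same tiny regression task.
data = [(np.array([[1.0], [2.0]]), np.array([2.0, 4.0])) for _ in range(4)]
w_new = ifed_round(np.zeros(1), data)
```

Sensitive users (Step 3) are absent here because they never upload; they only consume the aggregated model.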

Key algorithms in IFed
In this section, the details of several key technology designs of this work are presented.

Model aggregation for provider
First, in this section, we introduce how the electricity provider learns a shared model by aggregating locally computed updates. As in traditional federated learning, we select a C-fraction, which controls the global batch size, and a fixed learning rate $\eta$. Let $w_t^g$ denote the current global model and $w_{t+1}^g$ the next-round global model to learn. Then, the electricity provider aggregates the clients' gradients $g_k$ and applies the update to $w_t^g$ as

$$w_{t+1}^g = w_t^g - \eta \sum_{k=1}^{K} \frac{n_k}{n} g_k$$

Typically, we take $f_i(w) = l(x_i, y_i; w)$ as the loss of the prediction on example $(x_i, y_i)$ under model $w$; in this work, we adopt the cross-entropy loss. Because the number of regular users may be very large, we cannot update the global model with all of their local models, which is inefficient and unnecessary. Optimizing this by randomly selecting a C-fraction of all regular users, we have

$$K = C \cdot R$$

where $C$ is the C-fraction and $R$ is the number of regular users.
In summary, our model aggregation is similar to federated stochastic gradient descent (FedSGD); however, we can apply it not only to algorithms based on neural networks but also to other advanced mechanisms, such as those based on Markov models or sparse coding.

(Figure 3 caption: The green circular columns are the regular users and the orange circular columns are the sensitive users; they cannot communicate with each other. Another column is the electricity provider.)
We will introduce this in detail in the following section. Then,

$$w_{t+1} = \sum_{k=1}^{K} \frac{n_k}{n} w_{t+1}^k$$

In summary, the work on the provider side is given by Algorithm 1.
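A minimal sketch of this provider-side aggregation follows, assuming a random C-fraction selection of regular users and $n_k/n$ weighting as in the formula above. The function name and the toy models/sizes are illustrative.

```python
import numpy as np

def aggregate(models, sizes, c_fraction, rng=np.random.default_rng(0)):
    """Provider-side aggregation: pick K = C*R regular users at random and
    weight each selected model by its sample share n_k / n."""
    R = len(models)
    K = max(1, int(c_fraction * R))
    chosen = rng.choice(R, size=K, replace=False)
    n = sum(sizes[k] for k in chosen)
    return sum((sizes[k] / n) * models[k] for k in chosen)

models = [np.array([1.0]), np.array([3.0]), np.array([5.0]), np.array([7.0])]
sizes = [10, 10, 10, 10]
w_g = aggregate(models, sizes, c_fraction=1.0)
print(w_g)  # [4.] -- with C = 1 and equal sizes this is the plain average
```

With C < 1 only a random subset of uploads enters each round, which is the efficiency optimization described above.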

HFL for regular users
Consider that regular users $R$, in most cases, are common residents or enterprises with similar living or business behavior; therefore, we assume the feature spaces and label spaces are the same,

$$X_i = X_j, \quad Y_i = Y_j, \quad \forall i, j \in R$$

where $X$ denotes the feature space and $Y$ denotes the label space. However, every user (smart meter) is unique. Therefore, the data held by each data owner $i$ differ, and their user spaces, denoted by $I$, satisfy

$$I_i \neq I_j, \quad \forall i \neq j$$

Learning procedure. In this section, we introduce the learning procedure of regular users, specifically how to generate a local model from the global model and locally cached data. First, the regular smart meter downloads the global model from the electric power provider and overwrites its local parameters. Then, it runs one epoch of SGD training on its local dataset when using an obfuscation mechanism based on a neural network, or one epoch of dictionary updates in sparse coding, or the corresponding step in other obfuscation mechanisms. Third, each smart meter $i$, $i \in R$, computes $\Delta w_i$, which reflects how much each parameter has to change to more accurately model the local dataset of the $i$th smart meter. Note that performing one epoch of SGD training or one round of dictionary updates has a much lower cost than full training. It is a simple task that a smart meter can complete.
Objective function. While we focus on regular users' objectives, the algorithm we consider is applicable to minimizing the loss between locally cached data and local parameters. In a neural network, the objective function is

$$\arg\min_{\Theta} L = \sum_{i=1}^{n} l(y_i, w(x_i))$$

where $l(a, b)$ denotes the loss function between $a$ and $b$ (in this work, the cross-entropy loss) and $\Theta$ denotes all the parameters to be learned. In sparse coding, the objective function minimizes the reconstruction error of the local data $x_u$ under the dictionary $[B_1 \; \ldots \; B_n]$.

Obfuscation and upload. To prevent an adversary from recovering private data from the local model, regular users can obfuscate the local model before uploading it. One approach is for all regular users to use a fixed Gaussian mechanism $N(0, \sigma^2)$, where $N(0, \sigma^2)$ denotes a zero-mean Gaussian distribution with variance $\sigma^2$. The regular smart meter then uploads the obfuscated local model. Another approach is for each regular user to use a different Gaussian mechanism calibrated to his own model. In summary, all the work for a regular user is given in Algorithm 2.
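Both obfuscation variants can be sketched in a few lines. The fixed variant adds the same $N(0, \sigma^2)$ noise for every user; for the per-user variant, the rule below (noise scale proportional to the model norm) is only one illustrative way a user might derive $\sigma$ from his own model, not the article's prescribed calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

def obfuscate_fixed(w, sigma):
    """Fixed Gaussian mechanism: every regular user adds N(0, sigma^2)."""
    return w + rng.normal(0.0, sigma, size=w.shape)

def obfuscate_adaptive(w, scale=0.05):
    """Per-user Gaussian mechanism: sigma derived from the model itself
    (illustrative rule: proportional to the parameter-vector norm)."""
    sigma = scale * np.linalg.norm(w)
    return w + rng.normal(0.0, sigma, size=w.shape)

w_local = np.array([0.4, -1.2, 0.7])
print(obfuscate_fixed(w_local, sigma=0.1))
```

The obfuscated vector, not the raw one, is what leaves the smart meter; the privacy it buys is quantified in section ''LDP analysis''.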

HFTL for sensitive users
Now we consider sensitive users $S$, a more complex case: sensitive electric power customers who have specialty electric appliances that nobody else has. The state of these appliances may be the major concern of behavior privacy protection.

Algorithm 1. Federated learning for local differential privacy (grid side). The regular users are randomly selected by C-fraction, and $\eta$ is the learning rate.
Input: sensitive users $u$, regular users $r$, and the number $K$ of smart meters randomly selected for activation.
[Grid Side Execution]:
1: initialize $w_0$
2: for each round $t = 0, 1, \ldots$ do
3:   for each activated regular user $k$ do
4:     $w_{t+1}^k \leftarrow w_t - \eta g_k$
5:   end for
6: end for

Therefore, in this case, the feature spaces and label spaces are not the same,

$$X_i \neq X_j, \quad Y_i \neq Y_j$$

where $X$ denotes the feature space and $Y$ denotes the label space.
Obviously, the user spaces are also not the same: $I_i \neq I_j$.

Learning procedure. To the best of our knowledge, this is a complex task for existing HFL or vertical federated learning, because the user spaces, feature spaces, and label spaces all differ. To resolve the distribution difference, we perform HFTL as follows. First, the sensitive smart meter downloads the global model and other hyper-parameters, such as the feature space $X_g$ and label space $Y_g$, from the electric power provider; note that the sensitive parameters of the user space $I_g$ are not included. Second, a simple objective function, combining a distribution difference metric $\mathrm{DIST}(\cdot)$, a regularizer $O(F)$ controlling the complexity of the mapping function $F$, and the variance $\mathrm{VAR}$ of the instances, can be used to find a mapping for feature extraction. Third, the meter runs one epoch of SGD training on its local dataset when using the obfuscation mechanism. Last, each sensitive smart meter $i$, $i \in S$, computes $\Delta w_i$. Note that performing the procedure with feature mapping is more complex than for regular users; however, for a sensitive user with simple IoT devices, it is computationally acceptable. The Gaussian mechanism is calibrated to the function's dataset sensitivity $S_f$; therefore, the Gaussian noise is defined as $N(0, \sigma^2 S_f^2)$. Then, the users send $f(d) + N(0, \sigma^2 S_f^2)$ to the electric provider.
Let $w_{g_0}$ denote the initial global model; then the objective function is

$$\arg\min_{\Theta} L = \sum_{i=1}^{n} l(y_i, w_{g_0}(x_i))$$

where $l(a, b)$ denotes the loss function between $a$ and $b$ (in this work, the cross-entropy loss) and $\Theta$ denotes all the parameters to be learned, for example, the weights and biases in a neural network, the transition and emission matrices in a Markov model, or the dictionary in sparse coding.
Objective function. The objective function of sensitive users is similar to that of regular users introduced in the previous section. To avoid repetition, only a brief summary is given as follows.
In a neural network and in sparse coding, the objective functions take the same forms as those given for regular users.

Algorithm 2. Federated learning for regular smart grid or IoT users. $R$ is the number of regular users (smart meters), the regular users are randomly selected by C-fraction, $K$ is the number of smart meters randomly selected for activation, and $\eta$ is the learning rate.
Input: pre-trained model $w_t^g$ from the grid; a small amount of local history data $D_{r(t)}$; $K = CR$.
[Regular User Execution]:
Tuning the local model:
1: for $i = 1, 2, \ldots$ do
2:   for $b \in B_k$ do
3:     $w_t^k \leftarrow w_g^k - \eta \nabla f_k(w_g^k; b)$
4:   end for
5: end for
Upload the local model every 24 h:
6: for $t = 0, 1, \ldots, T_G - 1$ do
7:   for each regular smart meter do ...

Last, Algorithm 3 is given as follows:

LDP analysis
In this section, we adopt the DP method to analyze the privacy of IFed. The privacy of the energy consumption data depends on the specific obfuscation mechanism, not on the framework; we only discuss the potential privacy disclosure risk that arises from users uploading their models within this framework. First, we consider the privacy disclosure risk of sensitive users, which differs from conventional federated learning. Sensitive users do not upload their local models to anyone in this work, so existing behavior inference attacks cannot work, and the sensitive users can be considered completely secure. Next, we discuss the risk that remains for regular users.
Second, we explore the simplest case, in which all regular users $r_1, r_2, \ldots, r_t$ add the same Gaussian noise to their models $w_{r_1}, w_{r_2}, \ldots, w_{r_t}$ and generate new models $w'_{r_1}, w'_{r_2}, \ldots, w'_{r_t}$.

Theorem 1 (Privacy loss for a given user). If the regular users $r_1, r_2, \ldots, r_t$ add a given fixed noise $N(0, \sigma^2)$ to each local model, then each upload satisfies $(\varepsilon_{r_t}, \delta_{r_t})$-DP, where $\delta_{r_t} = \delta_{r_1} = \delta_{r_2} = \ldots$ This is the privacy loss for a given user $r_t$ if the obfuscated model $w'_{r_t}$ is snooped by an adversary.

Proof. Let the sensitivity $\Delta f$ of $w$ be $\Delta f = \max \| f(w_{r_t}) - f(w'_{r_t}) \| < 1$, where $w$ is adjacent to $w'$; then the Gaussian mechanism $f(w_{r_t}) + N(0, \sigma^2)$ offers $(\varepsilon_g, \delta_g)$-DP when $\sigma \ge \sqrt{2 \ln(1.25/\delta_g)} \, \Delta f / \varepsilon_g$. Therefore, $\lambda \le \sigma^2 \log(1/q\sigma)$ when $\varepsilon = c_1 q^2$ and $\sigma = c_2 q \sqrt{\log(1/\delta)} / \varepsilon$. The trade-off between data utility and privacy is mainly achieved by the advanced obfuscation mechanisms, and the detailed proofs can be found in the literature. 24,26-28 In this article, we only discuss the privacy loss and privacy bound caused by the IFed framework.
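The Gaussian-mechanism calibration used in the proof can be computed directly. This sketch uses the classical Dwork-Roth bound $\sigma \ge \sqrt{2\ln(1.25/\delta)}\,\Delta f/\varepsilon$; the function name and example parameter values are our own.

```python
import math

def gaussian_sigma(delta_f, epsilon, delta):
    """Classical Gaussian-mechanism calibration: adding N(0, sigma^2) with
    sigma >= sqrt(2 ln(1.25/delta)) * delta_f / epsilon yields
    (epsilon, delta)-DP for a query with sensitivity delta_f."""
    return math.sqrt(2 * math.log(1.25 / delta)) * delta_f / epsilon

# Sensitivity bounded by 1, as in the proof of Theorem 1.
sigma = gaussian_sigma(delta_f=1.0, epsilon=0.5, delta=1e-5)
print(round(sigma, 2))  # 9.69
```

As expected, tightening epsilon or delta inflates the required noise scale, which directly trades utility for privacy.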

Experiments
In this section, we conduct experiments about IFed.

Experimental setup
Dataset and baseline algorithms. We use the reference energy disaggregation dataset (REDD) 42 for verification. It contains detailed power usage information, both whole-home and circuit/device-specific, from six homes. Each home recorded its electricity consumption for a month, including (1) the whole-home electricity signal at a high frequency (15 kHz) and (2) up to 24 individual circuits in the home, each labeled with its category of appliance or appliances, recorded at 0.5 Hz (plug-level monitors are recorded at 1 Hz).
We use SCRAPPOR 20 and FHMM, 27 already introduced in section ''Preliminaries,'' as baseline algorithms for IFed. The parameters of SCRAPPOR are as follows: 300 signals form a batch, corresponding to 15 min, and 192 batches form a 300 × 192 matrix; training runs for 10,000 iterations. The parameters of FHMM are as follows: 300 signals form a batch, corresponding to 15 min, with a maximum of 10 appliances. In our experiments, after 10,000 iterations, training takes about 50 min and the sparsity of the dictionary reaches 99%.
Metrics. The F1-score, a composite metric built from the basic detection counts (true positives $TP$, false positives $FP$, and false negatives $FN$), is employed in this article to evaluate IFed:

$$\mathrm{F1} = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}, \quad \mathrm{precision} = \frac{TP}{TP + FP}, \quad \mathrm{recall} = \frac{TP}{TP + FN}$$
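The F1-score can be computed from these counts in a few lines; the example counts below are illustrative, not results from the REDD experiments.

```python
def f1_score(tp, fp, fn):
    """F1 combines precision and recall, themselves built from the
    true-positive / false-positive / false-negative counts of NILM
    appliance-state detection."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 80 correctly detected on-events, 10 false alarms, 20 missed events.
print(round(f1_score(80, 10, 20), 3))  # 0.842
```

A lower NILM-attack F1-score on obfuscated data indicates stronger privacy protection, which is how the comparisons below should be read.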

Experimental results
We studied the effects of different algorithms and frameworks against NILM and compared the F1-score.
Our experiments show that IFed provides a very good support for the advanced algorithms that achieve DP better in Power IoTs.
As shown in Table 1, we add noise to each of the aforementioned algorithms to generate the obfuscated data and then calculate the F1-scores of the fridge, light, and washer dryer with the algorithm of Kim et al. 43 Thus, by adding noise to the energy consumption signature, IFed achieves a better privacy-preserving level. Figure 4 shows the advantages of advanced obfuscation algorithms for DP in Power IoTs.
From an intuitive point of view, to achieve the same level of privacy protection, our framework with some advanced obfuscation mechanisms adds less noise than transitional schemes. Moreover, by adopting to our scheme, the computing burden of IoT terminal can be greatly reduced. In the following section, we will verify these two points by experiments.
The trade-off between privacy and utility. In Table 1, we add noise to each of the aforementioned algorithms to generate the obfuscated data and then calculate the F1-scores of the NILM attack. By adding noise to the energy consumption signature, our scheme achieves a better privacy-preserving level (Figures 5 and 6).

The trade-off between privacy and computing consumption. Our IFed framework effectively transfers the high-computing-consumption task to the grid side, so the IoT terminal does not need to undertake any model training. As shown in Figure 7, the computing consumption of the IoT terminal shows a sharp decline (Table 2).

Conclusion
One fundamental challenge to LDP in Power IoTs is how to achieve the trade-off between utility and privacy while keeping execution feasible on naive IoT terminals. It is very different from the previously studied trade-offs.

(Figure 5 caption: This figure shows that the data loss of advanced obfuscation mechanisms with IFed is lower than that of other regular obfuscation mechanisms. The solid line represents the error of the advanced obfuscation mechanism with IFed and the dotted line represents the error of Barbosa's scheme.)

(Figure 6 caption: Privacy comparison. This figure shows that after obfuscation by SCRAPPOR with IFed at the same privacy budget, most appliances' F1-scores decline more than with the previous algorithms. The blue bar represents the NILM attack F1-score on raw data, the red bar represents the NILM attack F1-score on obfuscated data with Barbosa's scheme, and the orange bar represents the NILM attack F1-score on obfuscated data with SCRAPPOR with IFed.)

(Figure 7 caption: The blue bar represents the time consumption of the SCRAPPOR and FHMM algorithms without IFed, the red bar represents the time consumption of the SCRAPPOR and FHMM algorithms with IFed, and the orange bar represents the time consumed for only receiving and sending raw data without any obfuscation. Using advanced obfuscation mechanisms without IFed, the time consumption in the IoT terminal tends to dozens of minutes, in contrast to the same mechanisms with IFed.)