An efficient and secure data auditing scheme based on fog-to-cloud computing for Internet of things scenarios

With the widespread use of fog-to-cloud computing-based Internet of things devices, ensuring the integrity of the data uploaded to the cloud has become one of the most important security issues. This article proposes an efficient and secure data auditing scheme based on fog-to-cloud computing for Internet of things scenarios, which better meets performance and security requirements. The proposed scheme realizes data sharing while protecting privacy by encrypting sensitive information. Using the private key separation method, the private key is divided into two parts, one randomly selected and one generated from identity information, which are held by the user and the fog center, respectively. Then, using the two-time signature method, the Internet of things device and the fog computing center use the two parts of the private key to generate the original signature and the final signature in two separate steps. Since the fog computing center holds only the part of the private key generated from the identity information, the security of the system is not compromised even if that part is leaked; at the same time, because the fog center carries most of the signature generation work, the computation and communication overhead of the Internet of things device is significantly reduced. Security analysis and performance evaluation show that the proposed scheme is secure and efficient.


Introduction
With the introduction and widespread use of Internet of things (IoT) devices in daily life and industrial production, fog computing, first proposed by Cisco in 2012, 1 is widely considered a new paradigm that can effectively support distributed, delay-sensitive, and quality of service (QoS)-aware applications. As an intermediate layer between the IoT device and the cloud, the fog computing center contains multiple fog nodes. A fog node has basic computing power, storage capabilities, and network resources to meet data preprocessing and transmission requirements. 2 Therefore, fog computing is an effective and practical solution from the perspective of delay constraints and resource constraints. It is estimated that the data generated by IoT devices have already exceeded half of all data. 3 In 2019, the data generated by IoT devices were expected to reach 500 ZB, to be preprocessed by the fog and then outsourced to the cloud for further analysis and storage. As an emerging technology, fog-to-cloud computing still faces many security challenges. 4 One of the most pressing issues is how to ensure the integrity and correctness of important data uploaded to the cloud. As in traditional cloud storage, the cloud service provider (CSP) may try to hide the fact that data are corrupted for the sake of its own benefit or reputation, 5 and may even delete data that users rarely access to save storage space. 6 It is therefore important to design an efficient and secure cloud data auditing scheme. At present, many cloud data auditing schemes have been proposed. [7][8][9][10][11] However, these schemes cannot be directly applied to IoT devices based on fog-to-cloud computing. 12 The reasons are as follows: (1) the signature generation overhead in traditional auditing schemes for cloud storage is too high, making them difficult to apply to IoT devices with low computing power, and (2) traditional auditing schemes for cloud storage do not involve fog nodes, but the fog node plays a very important role in fog-to-cloud computing, helping IoT devices to efficiently process and transmit data.
There are many successful applications of IoT based on fog-to-cloud computing. One of the most common scenarios is that users deploy sensors to collect required data, such as environmental data. Users can also deploy multiple sensors to collect data from different areas, or data of different types, and transmit them to the fog node. Each fog node is a small service device, provided by a service provider, that can perform simple processing and analysis of data. Multiple fog nodes form a fog computing center, and a management node is set up to manage all fog nodes. Finally, all data are outsourced to the cloud for cost-effective storage and other future uses. 13 In order to ensure the integrity of the data, the sensor should create verifiable metadata before sending the collected data to the fog node; the fog node must first verify the correctness of the received data and then perform local analysis and processing; and for the processed data, fog nodes should create new verifiable metadata. Finally, all data and the corresponding verifiable metadata are transferred to the cloud for long-term storage. The data owner, or any other authorized party, should be able to verify the integrity of the data anytime and anywhere, that is, remotely audit the correctness of the data. The purpose of this work is to design a secure and efficient auditing scheme for data storage based on fog-to-cloud computing for IoT scenarios.
Compared with existing schemes, the proposed scheme reduces the user's computation overhead through the two-time signature and private key separation methods, so that it can be better used by IoT devices with low computing capabilities. At the same time, the proposed scheme verifies the system parameters and the file tag during signature generation and proof generation to ensure the security of the scheme.

Our contributions
In this article, we propose an efficient and secure data auditing scheme based on fog-to-cloud computing for IoT scenarios (DAFCI). Our contributions can be summarized as follows:
1. We use the private key separation method, which divides the private key into two parts: the part sk' is randomly selected and held by the IoT device, while the other part sk'' is generated by the private key generator (PKG) based on the user's identity information, held by the fog nodes, and used to generate the final signature. This method improves the security of the system.
2. We propose the two-time signature method, which divides the signature process into two phases: original signature and final signature. First, the sensor uses sk' to generate the original signature set and sends it to the fog node; the fog node then uses sk'' to generate the final signature based on the original signature. This method offloads most of the signature generation work from the IoT device while preserving security.
3. We formally prove the security of the proposed scheme and evaluate its performance through theoretical analysis and experimental comparisons with other schemes. The results show that the proposed scheme effectively achieves secure auditing in fog-to-cloud computing and outperforms the others in computation complexity and energy consumption.

Organization
The rest of this article is organized as follows. In section ''Related work,'' we review the related work on cloud data auditing, especially public auditing schemes. In section ''Notations and preliminaries,'' we present notations and preliminaries. In section ''System model and security model,'' the system model and security model are presented. In section ''The proposed scheme,'' we introduce the proposed scheme. In sections ''Security analysis'' and ''Performance evaluation,'' the security analysis and the performance evaluation are given, respectively. Finally, in section ''Conclusion,'' we conclude the article.

Related work
With the increasing popularity and importance of IoT, the security and privacy issues of IoT have attracted widespread attention from industry and academia. 14 Many fruitful and relevant studies have been carried out, such as how to implement secure communication between IoT nodes, 15 how to preserve privacy while providing users with secure data services in IoT applications, 16 and how to ensure the integrity of the messages moving through the entire IoT infrastructure. 17 In most applications, the sensor device collects the data and generates verifiable metadata, and then uploads the metadata to the fog node. After receiving the metadata, the fog node first verifies it; if the result is true, the metadata are further processed and then uploaded, together with the data, to the cloud for long-term storage. Data owners can verify at any time that the data stored in the cloud are correct and intact, that is, the data owner can perform remote integrity auditing of the data. Auditing models are divided into private auditing and public auditing. In private auditing, the verification operation is carried out only between the CSP and the user, without the intervention of a third party, 18 whereas in public auditing the verification operation is customarily performed by an authorized third-party auditor (TPA). 19 Public auditing is generally considered the more effective approach because it provides more convincing results while significantly reducing the user's computation and communication overhead. At present, data integrity auditing for IoT devices has received more and more attention. Aazam et al. 20 argued that when IoT devices use the cloud to process and store data, some problems remain to be solved: because the cloud is not trusted, users cannot be sure that the cloud correctly stores the data uploaded by IoT devices.
For data stored on the cloud server, the provable data possession (PDP) scheme 21 can effectively verify the integrity of cloud data and achieve blockless verification, that is, verification without downloading the original data. Integrity verification ensures storage security, but the computation and communication overhead of this scheme is too large for it to be applied to IoT devices. Xu et al. 22 proposed a distributed fully homomorphic encryption-based Merkle Tree (FHMT) scheme, which effectively solves the cloud credibility problem using blockchain technology, but their scheme does not implement public auditing. Yu et al. 23 proposed an identity-based private key generation auditing scheme, which reduces the overhead of certificate management, but the overhead of signature generation and auditing proof verification is still large for the IoT device. Zhu et al. 24 proposed a short signature-based data auditing scheme, which reduces the computational overhead of signature generation; however, their scheme is not based on fog-to-cloud computing, and the communication and computation overhead of IoT devices is still very high. Tian et al. 25 reduced the overhead of signature generation by introducing fog nodes, but in their scheme fog nodes are considered credible, whereas in reality the fog node used by most users is not a trustworthy entity, just like the cloud. Huang et al. 26 proposed the Efficient Versatile Auditing Scheme for IoT-based Datamarket in JointCloud to solve the single point of failure (SPoF) of the cloud server, using the blockchain to record the data flow, but it still does not solve the problem that the signature generation overhead of the cloud auditing scheme is too high, so it cannot be directly applied to IoT devices with low computing power.
Li and Yu 27 proposed a cloud auditing scheme based on threshold secret sharing, but compared with it, the two-time signature method has two advantages:
1. A characteristic of threshold secret sharing is that once the number of participants who want to recover the secret reaches a predetermined value, the secret can be reconstructed even without the remaining parts of the private key. This approach is difficult to apply directly to IoT scenarios: if the private key is distributed among sensors, then once the parts held by some sensors are leaked, an adversary can forge a signature from those parts and access sensitive data.
2. The above-mentioned schemes and other existing schemes cannot fully meet the requirements of low computation overhead and high security for IoT devices, so we propose a cloud auditing scheme that better meets the requirements of the IoT environment.
In a threshold secret sharing scheme, the parts of the private key are different. In the proposed scheme, however, the part of the private key held by all sensors in the same system is identical, so that even different types of sensors generate the same form of signature, ensuring the consistency of signatures and facilitating verification.
In the proposed scheme, fog-to-cloud computing is introduced, but the fog is designed as an untrusted entity. The private key separation and two-time signature methods prevent the situation where the private key is directly leaked by the fog node and security is destroyed by that leakage, improving both efficiency and security.

Notations and preliminaries
The symbols appearing in the DAFCI scheme and their descriptions can be seen in Table 1.

Preliminary 1. Bilinear map: let G_1, G_2 be two multiplicative cyclic groups of the same large prime order p, and let g be a generator of G_1. A bilinear map is a map e : G_1 × G_1 → G_2 with the following properties:
(a) Bilinearity: for all g_1, g_2 ∈ G_1 and a, b ∈ Z*_p, e(g_1^a, g_2^b) = e(g_1, g_2)^{ab}.
(b) Computability: there exists an efficiently computable algorithm for computing the map e.
(c) Non-degeneracy: e(g, g) ≠ 1.

2. Computational Diffie-Hellman (CDH) problem: for unknown x, y ∈ Z*_p, given g, g^x, g^y ∈ G_1 as input, output g^{xy} ∈ G_1. The CDH assumption in G_1 holds if it is computationally infeasible to solve the CDH problem in G_1.

3. Discrete logarithm (DL) problem: for unknown x ∈ Z*_p, given g, g^x ∈ G_1 as input, output x. The DL assumption in G_1 holds if it is computationally infeasible to solve the DL problem in G_1.

System model
The system model of the DAFCI scheme is shown in Figure 1. The model mainly includes five different entities: CSP, sensor, PKG, fog computing center (FCC), and TPA, as follows:
CSP: the CSP provides enormous data storage and data processing services to users.
Sensor: the sensor device is responsible for collecting data, hiding the sensitive information, generating the original signature, and uploading it to the FCC. A system can contain multiple sensor devices. Multiple sensors belonging to the same system can be of different types, collecting different data separately, but they hold the same private key.
FCC: the FCC consists of several fog nodes, each of which is responsible for generating final signatures for the sensors in its area to reduce the computational overhead of the sensors.
PKG: the PKG is responsible for generating the partial private key sk'' for sensors and the system public parameters. The sensor will check the sk'' generated by the PKG.
TPA: the TPA is a third-party auditor. It helps sensors audit the integrity of the data stored in the cloud.
After the sensor collects the data, the sensitive information is blinded. Since this part of the data accounts for only a small proportion, the processing overhead is very low. The original signature is then generated with the sensor's part of the private key and sent to the FCC. After receiving the original signature, the FCC uses the partial private key it holds to generate the final signature and uploads it to the CSP.
When the sensor wants to verify the cloud data, it sends an audit request to the TPA. After receiving the request, the TPA selects challenge blocks according to the block numbers contained in the request and initiates an audit challenge to the CSP. The CSP then sends the data auditing proof back to the TPA. Finally, the TPA verifies the proof and informs the sensor of the result.
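The challenge step of this workflow can be sketched in code. The following is a minimal illustration only, not the scheme's implementation: the TPA draws c distinct block indexes and a random coefficient v_i in Z*_p for each, forming chal = {(i, v_i)}. The function name, the toy value of p, and the seed parameter (used only to make the demo reproducible) are assumptions.

```python
import random

def gen_challenge(n: int, c: int, p: int, seed: int = None) -> list:
    """TPA-side challenge generation: pick c distinct block indexes
    from [1, n] and a random coefficient v_i in [1, p) for each."""
    rng = random.Random(seed)
    indexes = rng.sample(range(1, n + 1), c)      # the index set I, a subset of [1, n]
    return [(i, rng.randrange(1, p)) for i in sorted(indexes)]

# Example: 1000 stored blocks, 300 challenged blocks.
chal = gen_challenge(n=1000, c=300, p=2**255 - 19, seed=7)
assert len(chal) == 300
assert len({i for i, _ in chal}) == 300           # indexes are distinct
```

The CSP would answer such a challenge with a proof over exactly the challenged blocks, which the TPA then verifies.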

Design goals
In order to effectively ensure the secure storage of sensor data in the cloud, the DAFCI is designed to achieve the following goals.

The detectability
The scheme should detect damage to the cloud data with sufficiently high probability. When the proportion of damaged blocks is r, the probability of finding a damaged block during the audit should be at least d. This probability is called the probability of detection, and such a scheme is called (r, d)-detectable.
The correctness
The correctness includes the correctness of the private key and the correctness of the auditing. The correctness of the private key: the partial private key sk'' generated by the PKG is verified to ensure its correctness. The correctness of auditing: the TPA verifies the auditing proof generated by the CSP to ensure the integrity of the sensor data.

The soundness
The soundness requires that the probability that any false proof passes verification is negligible. In other words, only a CSP that correctly stores the data can generate a correct auditing proof.
Definition
Definition 1. The DAFCI scheme consists of six algorithms: Setup, Extract, SigGenU, SigGenF, ProofGen, and ProofVerify. These algorithms are described as follows:
Setup(1^k): this setup algorithm is executed by the PKG. It takes as input a security parameter k and outputs the system public parameters pp.
Extract(pp, ID): this algorithm consists of two parts. One part is the extraction algorithm run by the PKG: it takes as input the system public parameters pp and the user's identity ID, and outputs the sensor's partial private key sk''. The sensor will verify this part of the private key and accept it if sk'' passes the verification. The other part is performed by the sensor: the sensor selects x as the other part of the private key, sk'.
SigGenU(F, sk', name, ssk): this algorithm is run by the sensor. It takes as input the signing private key ssk, the original file F, the partial private key sk' of the sensor, and the file identifier name. It outputs the blinded file F*, the corresponding original signature set Φ, and the file tag t.
SigGenF(sk'', name, Φ): this algorithm is run by the FCC. It takes as input the partial private key sk'' generated by the PKG, the file identifier name, and the original signature set Φ, and it outputs the final signature set Φ'.
ProofGen(F*, Φ', chal): this algorithm is run by the CSP. It takes as input the blinded file F*, the final signature set Φ', and the auditing challenge chal, and outputs the auditing proof P.
ProofVerify(pp, P, chal): this verification algorithm is executed by the TPA. It takes as input the auditing challenge chal, the system public parameters pp, and the auditing proof P. The TPA verifies the proof and informs the sensor of the auditing result.

Security model
In the DAFCI scheme, the PKG may be untrusted, because the PKG only generates the partial private key sk'' according to the identity information of the system to which the sensor belongs; the sensor generates the other partial private key sk' itself, which further improves the security of the system. The TPA is honest; it helps the sensor audit the data stored in the cloud and reduces the burden on the sensor. The fog node helps the sensor generate the corresponding signature and is not completely trusted. The security model of the DAFCI scheme is defined through the following game between an adversary and a challenger:
Setup phase: the challenger C runs the Setup algorithm to obtain the private key sk and public parameters pp, and then sends the public parameters pp to the adversary A.
Query phase: in this phase, adversary A makes the following two kinds of queries to the challenger C. Extract queries: the adversary A sends identity information ID to challenger C; the challenger executes the Extract algorithm to obtain the private key sk_ID and sends it to the adversary A. SigGen queries: the adversary asks for the signature corresponding to a file F; the challenger executes the Extract algorithm to obtain the private key, then runs the SigGen algorithm to compute the signature corresponding to file F and sends it to adversary A.
Challenge phase: in this phase, the adversary acts as a prover and the challenger acts as a verifier. The challenger C sends the challenge chal = {i, v_i}_{i∈I} to the adversary A, where I is a set of c randomly selected block indexes. The challenger asks the adversary to provide the proof of possession P for the corresponding data blocks {m_i}_{i∈I}.
ProofGen phase: after receiving the challenge from challenger C, the adversary A generates the corresponding proof P for the challenged blocks and sends it to the challenger C.
If it is proved that P can pass the verification with a non-negligible probability, it can be considered that the adversary A has won the game.
For the above security model, it is necessary to prove that if the adversary does not hold all the challenged blocks required by the challenger, it cannot generate a correct proof P; that is, a forged proof from the adversary cannot pass verification. The goal of the adversary A is to correctly generate a proof corresponding to the challenge chal without holding the data blocks.
Definition 2. We consider a remote data integrity auditing scheme secure if the following condition holds: whenever the adversary A in the aforementioned game is able to generate a valid proof P that passes the challenge of the challenger C with non-negligible probability, there exists a knowledge extractor that can extract the challenged data blocks, except with negligible probability.
Definition 3. If only the data owner can view the sensitive information in the file, and the CSP and other shared sensors can view only the non-sensitive information, then the sensitive information in the file is safe.

An overview
The sensor has low computing power and limited battery life. In order to minimize the energy consumption of the sensor device, the DAFCI scheme uses blockless verification technology so that integrity verification can be completed without downloading the blocks stored in the cloud. The private key separation method and the two-time signature method are used to further reduce the computation overhead of the sensor. The private key separation method divides the private key sk into two separately generated parts, that is, sk = (sk', sk''). The two-time signature method means that the original signature set Φ = {σ_i}_{i∈[1,n]} is generated by the sensor and uploaded to the FCC, and the FCC then generates the final signature set Φ' = {σ'_i}_{i∈[1,n]}. In the DAFCI scheme, the PKG uses the identity information received from the sensor to generate the partial private key sk''; the sensor can verify the correctness of this part of the private key. The other part, sk', is selected by the sensor and requires no additional verification. Before the sensor uploads data to the cloud, the file is blinded; other shared sensors can access only the non-sensitive information, not any content involving sensitive information, while the sensor can recover the blinded file when needed. The signature used to verify the integrity of the data is generated in two steps. After the file is blinded, the sensor device generates the original signature; the FCC then performs the second signing to produce the final signature set. Finally, the fog node sends the signature set and the blinded file to the CSP and sends the file tag to the TPA. When the sensor wants to check the integrity of the data stored in the cloud, it sends an audit request to the TPA. After receiving the request, the TPA challenges the CSP and asks the CSP to return the corresponding auditing proof.
The TPA verifies the auditing proof and determines whether the CSP stores the sensor's data correctly, and sends the auditing result to the sensor. The specific details are described in the following section.
In addition, the proposed scheme can be further applied to edge-cloud computing. The large amount of data collected by the sensors in the system is analyzed and processed at the edge, close to the user, in a scalable way, and users obtain real-time insights and experiences through responsive and context-aware applications.

Description of the proposed scheme
In the DAFCI scheme, the original file F is divided into n blocks, F = {m_1, m_2, ..., m_n}, where m_i ∈ Z*_p denotes the i-th block of file F. Assuming the sensor identity information ID is l bits long, it can be written as {ID_1, ID_2, ..., ID_l} ∈ {0, 1}^l. Usually, sensitive information accounts for only a small part of the whole file, and only this part needs to be blinded, so the blinding overhead is small. After the blinded file F* = {m*_1, m*_2, ..., m*_n} is formed, the sensor uses the private key sk' = x to generate the original signature σ_i = H(m*_i)^x. The private key sk' is selected and kept by the sensor device and is not known to any other entity. The original signature set Φ = {σ_i}_{i∈[1,n]} is then sent to the FCC. The FCC generates the final signature σ'_i using the part of the private key generated by the PKG, sk'' = ∏_{j=1}^{l} m_{ID_j}, and forms the final signature set Φ' = {σ'_i}_{i∈[1,n]}. The two-time signature method is shown in Figure 2.
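As an illustration only, the two-time signature flow can be mimicked in a toy multiplicative group. A small Schnorr-style subgroup stands in for the scheme's pairing-friendly groups, and the hash-to-group mapping and all parameter values are assumptions for demonstration: the sensor signs with sk', the fog node raises the result to sk'', and the final signature equals a one-shot signature under the combined key.

```python
import hashlib

# Toy subgroup: p = 2q + 1 with g generating the order-q subgroup.
# Real DAFCI uses pairing-friendly groups of large prime order.
p, q, g = 23, 11, 4

def hash_to_group(msg: bytes) -> int:
    # Simplified hash-to-group via an exponent of g (an assumption,
    # not the scheme's actual H : {0,1}* -> G_1).
    e = int.from_bytes(hashlib.sha256(msg).digest(), "big") % q
    return pow(g, e if e else 1, p)

sk1 = 3            # sk': chosen and kept by the sensor
sk2 = 7            # sk'': derived by the PKG, held by the fog node

def sign_original(msg: bytes) -> int:
    return pow(hash_to_group(msg), sk1, p)   # sensor: sigma_i

def sign_final(sigma: int) -> int:
    return pow(sigma, sk2, p)                # fog node: sigma'_i

m = b"block-1"
sigma_final = sign_final(sign_original(m))
# The final signature equals a one-shot signature under the full key:
assert sigma_final == pow(hash_to_group(m), (sk1 * sk2) % q, p)
```

Note that neither party alone holds the full exponent sk' * sk'', which is the point of the private key separation.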
Since the DAFCI scheme divides the private key into two parts, which are generated by PKG and sensor, respectively, the FCC only has the part calculated by PKG, so even if the FCC leaks part of the private key it owns, it is still difficult to forge the correct signature.
The following is a detailed description of the DAFCI scheme:

Algorithm Setup(1^k)
The PKG chooses two multiplicative cyclic groups G_1 and G_2 of the same prime order p, a generator g of G_1, a bilinear pairing e : G_1 × G_1 → G_2, and a pseudorandom function f : Z*_p × Z*_p → Z*_p. The PKG randomly chooses an element x ∈ Z*_p, then chooses elements m_0, m ∈ G_1 and a cryptographic hash function H : {0, 1}* → G_1. The PKG publishes the system parameters pp = (G_1, G_2, p, e, g, m_0, m, H, f).

Algorithm Extract(pp, ID)
The sensor sends the identity information ID = {ID_1, ID_2, ..., ID_l} ∈ {0, 1}^l to the PKG. After receiving ID, the PKG computes the partial private key sk'' = ∏_{j=1}^{l} m_{ID_j} and the partial public key pk'' = g^{sk''}, and then sends sk'' and pk'' to the sensor. After the sensor receives sk'' and pk'', it verifies the correctness of the received private key by checking whether equation (1) holds. If equation (1) does not hold, the sensor refuses to accept the key; if it holds, the sensor selects x and sets the other partial private key sk' = x and public key pk' = g^x, so that the private key is sk = (sk', sk'') and the public key is pk = (pk', pk''). The above process is illustrated in Figure 3.
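A minimal sketch of this acceptance check, under two simplifying assumptions: sk'' is derived from the identity by a hash (DAFCI instead derives it from the group elements m_{ID_j}), and pk'' = g^{sk''} in a toy group. The names and parameters are illustrative only.

```python
import hashlib

# Toy subgroup parameters (real DAFCI uses pairing-friendly groups).
p, q, g = 23, 11, 4

def extract_partial_key(identity: str):
    """PKG side: derive sk'' from the identity and publish pk'' = g^sk''.
    Mapping ID -> sk'' via a hash is an illustrative assumption."""
    sk2 = int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % q
    sk2 = sk2 or 1
    return sk2, pow(g, sk2, p)

def sensor_accepts(sk2: int, pk2: int) -> bool:
    # Sensor side: the received partial private key must match the
    # published partial public key (the role of equation (1)).
    return pow(g, sk2, p) == pk2

sk2, pk2 = extract_partial_key("sensor-42")
assert sensor_accepts(sk2, pk2)
assert not sensor_accepts(((sk2 + 1) % q) or 1, pk2)   # a tampered key is rejected
```

The check lets the sensor detect a corrupted or substituted sk'' before using it, matching the "refuse to accept" branch above.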
Algorithm SigGenU(F, name, sk', ssk)
The sensor first blinds the file; the sensitive information blinding method here draws on the scheme of Shen et al. 28 The sensor randomly selects a seed k_1 ∈ Z*_p as the private key of the pseudorandom function f. The sensor uses the seed k_1 to generate a blinder a_i = f_{k_1}(i, name) (i ∈ K_1) and then uses a_i to encrypt the sensitive information in the file. Here name ∈ Z*_p is a randomly selected number that identifies the file.
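The blinder generation just described can be sketched as follows, assuming HMAC-SHA256 as a stand-in for the pseudorandom function f and a toy modulus in place of the group order p. Only blocks whose index lies in K_1 are blinded, and only the holder of the seed k_1 can recover them.

```python
import hmac, hashlib

p = 2**61 - 1   # toy prime modulus standing in for the order p

def prf(k1: bytes, i: int, name: int) -> int:
    """f_{k1}(i, name): HMAC-SHA256 as a stand-in for the scheme's
    pseudorandom function f (an assumption, not specified by DAFCI)."""
    data = i.to_bytes(8, "big") + name.to_bytes(8, "big")
    return int.from_bytes(hmac.new(k1, data, hashlib.sha256).digest(), "big") % p

def blind(blocks, sensitive_idx, k1, name):
    # m*_i = m_i + a_i (mod p) only for indexes in K_1; others unchanged.
    return [(m + prf(k1, i, name)) % p if i in sensitive_idx else m
            for i, m in enumerate(blocks, start=1)]

def unblind(blocks, sensitive_idx, k1, name):
    # Only the holder of the seed k_1 can remove the blinders.
    return [(m - prf(k1, i, name)) % p if i in sensitive_idx else m
            for i, m in enumerate(blocks, start=1)]

F = [11, 22, 33, 44]
K1, k1, name = {2, 3}, b"seed-k1", 77
F_star = blind(F, K1, k1, name)
assert F_star[0] == F[0]                     # non-sensitive block untouched
assert unblind(F_star, K1, k1, name) == F    # recoverable by the data owner
```

This mirrors the property stated in the overview: shared parties see only non-sensitive blocks in the clear, while the data owner can recover the blinded file when needed.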
In order to protect the sensitive information, the sensor blinds the data blocks corresponding to the sensitive information of the original file F before sending it to the FCC. The indexes of these data blocks form the set K_1. The sensor computes the blinded data block m*_i = m_i + a_i for each data block m_i ∈ Z*_p with i ∈ K_1; blocks with i ∉ K_1 are left unchanged, m*_i = m_i. The blinded file is F* = {m*_1, m*_2, ..., m*_n}. After generating the blinded file, the sensor uses the hash function H to compute the hash value H(m*_i) of each block, and then uses the hash value and its part of the private key x to generate the original signature σ_i. The original signature set is Φ = {σ_i}_{i∈[1,n]}. The sensor computes t_0 = name ∥ m_0 and then generates the corresponding file tag Ssig_ssk(t_0), where Ssig is the signature on t_0 under the signing private key ssk. Finally, the sensor sends {Φ, t_0, F*, sk'', pk''} to the FCC for further computation.

Algorithm SigGenF(sk'', name, Φ)
The FCC first checks whether the signature Ssig_ssk(t_0) is valid. If it is valid, the FCC parses the file tag t_0 to obtain the file identifier name and the verification value m_0, and then performs the next step.
The FCC further verifies the correctness of each original signature σ_i (i ∈ [1, n]) by checking whether equation (2) holds. If equation (2) holds, the FCC considers the original signature σ_i correct and proceeds to the following step. The FCC is responsible for computing the final signature σ'_i for the sensor, and the final signature set is Φ' = {σ'_i}_{i∈[1,n]}; the FCC then sends {F*, Φ'} to the CSP and sends the file tag t to the TPA.

Algorithm ProofGen(F*, Φ', chal)
The sensor or the TPA is authorized to challenge the CSP from time to time. The TPA checks whether the file tag t is valid. If it is invalid, the TPA does not perform the auditing request; if it is valid, the TPA obtains the file identifier name and the verification parameter m_0 by parsing the file tag, and then generates the auditing challenge chal. The challenge generation process is as follows: (a) the TPA randomly selects c elements from the data block index set to form the set I ⊆ [1, n]; (b) for each i ∈ I, the TPA chooses a random value v_i ∈ Z*_p, and the challenge is chal = {i, v_i}_{i∈I}. After receiving the challenge, the CSP generates the auditing proof P and returns it to the TPA.

Algorithm ProofVerify(pp, P, chal)
The TPA verifies the correctness of the auditing proof by checking equation (3). If equation (3) holds, the data stored in the CSP are intact; otherwise, they are not.

Security analysis
In this section, we prove that DAFCI is secure in terms of correctness, soundness, and detectability.

Theorem 1 (the correctness). The correctness of equation (2) can be proved by deducing the left-hand side from the right-hand side.

Theorem 2 (the soundness)
1. Privacy of the private key: in our scheme, the FCC is considered an incompletely trusted entity, and it may leak the partial private key it holds.
The scheme must guarantee that, even if the partial private key held by the FCC is leaked, the partial private key held only by the sensor cannot be derived from any other information. 2. Anti-replacement attack: during the TPA verification process, replacement attacks can never be effective. In other words, if the CSP does not store a data block or signature correctly, it cannot succeed in replacing that data block with another block to respond to the challenge and maintain its reputation. 3. Anti-replay attack: during the TPA verification process, replay attacks can never be effective.
In other words, a malicious CSP cannot pass the verification using a signature generated in a previous auditing.

Proof
1. The private key consists of two parts, namely, sk'' generated by the PKG based on the identity information provided by the sensor, and sk' selected by the sensor. sk'' is sent to the FCC for the second signing, but since the first signing is performed on the sensor device, even if the FCC leaks the partial private key it holds, it is still impossible to obtain the full private key sk. 2. The TPA selects some random blocks for auditing and sends the challenge chal = {i, v_i}_{i∈I} to the CSP. Assume that the TPA is under a replacement attack, that is, the CSP tries to use the signature information of another block and replaces part of the correct signature to pass verification; the TPA then checks the received forged proof against equation (4). If equation (4) holds, the replacement attack is successful. For equation (4) to hold, it must be that σ'_j^{v_i} = σ'_i^{v_i}, as in equation (5). Due to the privacy of the private key, even if sk'' is leaked, the CSP cannot forge sk', nor can it forge a correct signature. In other words, replacement attacks in DAFCI cannot succeed.
3. The replay attack game is similar to the replace attack game described above. The sensor also authorizes the TPA to audit. After receiving the auditing request, the TPA randomly selects several data blocks of the indicated file and generates the challenge $chal = \{i, v_i\}_{i \in [1,s]}$. In this scenario, the CSP attempts to pass verification using a signature generated in a previous auditing round. The proof of the anti-replay property is similar to that of the anti-replace property and is not repeated here.
Theorem 3 (the detectability). Assume that the original file F is divided into n blocks and encrypted to generate a blinded file $F^*$, which is then stored in the cloud. Among them, k blocks are modified or deleted by the CSP, and the number of challenged blocks is c. At this time, the detectability of the DAFCI scheme is at least $1 - \left(\frac{n-k}{n}\right)^c$.

Proof. Let X be the number of blocks in the intersection of the damaged blocks and the challenged blocks. Then
$$P_X = P\{X \ge 1\} = 1 - P\{X = 0\} = 1 - \prod_{i=0}^{c-1} \frac{n-k-i}{n-i}$$
From the above equation, we can know that
$$1 - \left(\frac{n-k}{n}\right)^c \le P_X \le 1 - \left(\frac{n-c+1-k}{n-c+1}\right)^c$$
because $\frac{n-i-1-k}{n-i-1} \le \frac{n-i-k}{n-i}$, so the probability that the DAFCI scheme detects a damaged block is at least $1 - \left(\frac{n-k}{n}\right)^c$. Suppose the proportion of damaged blocks is 1%, in other words $\frac{n-k}{n} = 0.99$. If the number of challenged blocks is 300, the detection probability reaches about 95%; if the number of challenged blocks is 400, the detection probability exceeds 98%.
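The bound in Theorem 3 can be checked numerically. The following sketch (illustrative only, not part of the DAFCI implementation; the block counts are placeholders) computes the exact detection probability and its lower bound for the 1% damage ratio used in the text:

```python
from math import prod

def detection_probability(n: int, k: int, c: int) -> float:
    """Exact probability that at least one of the k damaged blocks
    falls among the c challenged blocks (sampling without replacement)."""
    return 1.0 - prod((n - k - i) / (n - i) for i in range(c))

def lower_bound(n: int, k: int, c: int) -> float:
    """Lower bound 1 - ((n - k) / n)**c from Theorem 3."""
    return 1.0 - ((n - k) / n) ** c

# 1% of the blocks damaged (placeholder totals chosen so that k/n = 0.01).
n, k = 100_000, 1_000
for c in (300, 400):
    exact = detection_probability(n, k, c)
    bound = lower_bound(n, k, c)
    print(f"c={c}: exact={exact:.4f}, lower bound={bound:.4f}")
```

Running this shows the lower bound is about 0.95 at c = 300 and about 0.98 at c = 400, with the exact probability slightly above the bound in each case.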
Theorem 4 (sensitive information hiding). Sensitive information cannot be accessed by any entity except the data owner, including FCC, TPA, CSP, and shared sensors.
Proof. This part borrows from the encryption scheme of Shen et al.,29 but, considering the performance of the sensor device, it is lightweight. Because the file is blinded before the sensor hashes the file blocks, under the random oracle model only the sensor can access the sensitive information.
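The paper does not spell out the blinding operation at this point, but since its stated cost in the performance analysis is one addition per sensitive block (d · Add), it can be sketched as additive masking over the group order. The modulus P, block values, and function names below are illustrative assumptions, not the authors' code:

```python
import secrets

# Illustrative modulus standing in for the group order p (2**127 - 1 is prime).
P = 2**127 - 1

def blind_blocks(blocks, sensitive_indices, p=P):
    """Additively blind only the d sensitive blocks: m*_i = (m_i + r_i) mod p,
    matching the d * Add cost noted in the performance analysis."""
    masks = {i: secrets.randbelow(p) for i in sensitive_indices}
    blinded = [(m + masks[i]) % p if i in masks else m
               for i, m in enumerate(blocks)]
    return blinded, masks

def unblind_block(blinded_value, r, p=P):
    """Only the data owner, who keeps r, can recover the original block."""
    return (blinded_value - r) % p

blocks = [11, 22, 33, 44]
blinded, masks = blind_blocks(blocks, sensitive_indices={1, 3})
assert blinded[0] == blocks[0] and blinded[2] == blocks[2]  # non-sensitive untouched
assert all(unblind_block(blinded[i], r) == blocks[i] for i, r in masks.items())
```

Because the masks are uniformly random in Z_p, the blinded values reveal nothing about the sensitive blocks to the FCC, TPA, or CSP, which is the property Theorem 4 asserts.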

Performance evaluation
In this section, we first compare the functionality of the DAFCI scheme with some related schemes and then compare the computation overhead. Next, the communication overhead and computation complexity of the DAFCI scheme are discussed. Finally, the performance of the DAFCI scheme is demonstrated through experiments.

Functionality comparison
The functionality comparison between the DAFCI scheme and related schemes is shown in Table 2.

Performance analysis and comparison
First, define H as a hash operation, Mul as a multiplication operation, Add as an addition operation, P as a bilinear pairing operation, and e as an exponentiation operation. n is the number of file blocks, c is the number of challenged blocks, and l is the length of the identity information. d is the number of file blocks containing sensitive information. $|n|$ denotes the bit length of an element of the set $[1, n]$, $|p|$ the bit length of an element of $Z_p^*$, and $|q|$ the bit length of an element of $G_1$.
1. Computation overhead comparison: computation overhead refers to the cost of the computing tasks performed by each entity, including signature generation, proof generation, and proof verification. Table 3 compares the computation overhead of the DAFCI scheme with the schemes of Shen et al.29 and Tian et al.25 In section ''System model and security model,'' we introduced how sensitive information is encrypted; its computation overhead is d · Add. As mentioned earlier, since d ≪ n, this overhead is small and is therefore not reflected in the table. The computation overhead of the sensor to generate the original signature is n · (H + Mul). The original signature and related parameters are then sent to the FCC, and the overhead of the FCC to generate the final signature is n · (2Mul + 2e). The final signature is then sent to the CSP. During the challenge process, the overhead of generating the auditing proof is c · Mul + c · e, and the overhead of TPA verification after receiving the proof is (c + 1) · Mul + 2e + 3P. As shown in Table 3, the computation overhead of the DAFCI scheme is lower than that of the related schemes, and the reduction is most obvious on the sensor side. Compared with the scheme that also introduces an FCC, the computation overhead of the fog node is slightly, but not significantly, higher; the overhead generated is consistent with scheme B of Tian et al.25
2. Communication overhead comparison: communication overhead refers to the overhead incurred when transferring files or related parameters between different entities; reducing it improves the stability and practicability of the solution. The increase in communication overhead caused by introducing the FCC is negligible compared with the auditing process, so only the overhead of the auditing process needs to be considered. In this part, the TPA sends a challenge $chal = \{i, v_i\}_{i \in I}$ to the cloud. The size of the challenge is c · (|n| + |p|) bits. The CSP generates a proof P = {σ} of size |q| bits. Therefore, in one audit task, the communication overhead is c · (|n| + |p|) + |q| bits.
3. Computation complexity: we analyze the computation complexity of the different entities in different phases in Table 5. As Table 5 shows, the number of challenged blocks c, the total number of file blocks n, and the number of blocks containing sensitive information d determine the corresponding time complexity of each entity in the DAFCI scheme.
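As a quick sanity check on the auditing communication cost c · (|n| + |p|) + |q|, the sketch below plugs in illustrative sizes: |p| = 160 bits as in the experiments, and assumed values |n| = 20 bits for a block index and |q| = 160 bits for a compressed G1 element (the paper does not state these two, so they are placeholders):

```python
def audit_comm_overhead_bits(c: int, n_bits: int, p_bits: int, q_bits: int) -> int:
    """Challenge: c (index, coefficient) pairs of (n_bits + p_bits) bits each;
    proof: a single group element of q_bits bits."""
    return c * (n_bits + p_bits) + q_bits

# Illustrative parameters; |n| = 20 and |q| = 160 are assumptions, not from the paper.
bits = audit_comm_overhead_bits(c=400, n_bits=20, p_bits=160, q_bits=160)
print(bits, "bits =", bits / 8 / 1024, "KiB")  # 72160 bits, about 8.8 KiB
```

Under these assumptions, even a 400-block challenge costs under 10 KiB per audit, which illustrates why the challenge dominates the constant-size proof.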

Performance evaluation
In this section, we evaluate the performance of the proposed scheme through several experiments. The experiments were run on Ubuntu 16.04 with an Intel Core i5 3.0 GHz processor and 8 GB of memory. The program is written in C and uses the library functions of the pairing-based cryptography (PBC) library to implement the cryptographic operations, where the base field size is 512 bits, the size of an element of $Z_p^*$ is |p| = 160 bits, the size of the file is 20 MB, and the length of the user identity is 160 bits. The experimental results are the averages of 10 runs.
1. Performance of different processes: to effectively evaluate the performance of the different processes, we set the number of data blocks to 100 and the number of blinded data blocks to 5 in our experiment. As shown in Figure 4, private key generation takes nearly 0.26 s, the original signature generation takes 0.312 s, and the final signature generation takes 2.4 s. Signature verification and sensitive information blinding take 0.22 and 0.031 s, respectively. We can therefore conclude that, among these processes, the final signature generation takes the longest time.
2. Performance of signature generation: to evaluate the performance of signature generation and signature verification, we generate signatures for different numbers of blocks, from 100 to 1000 in intervals of 100. As shown in Figure 5, the time cost of the original signature generation, the final signature generation, and the signature generation without DAFCI all increase linearly with the number of data blocks. The original signature generation ranges from 0.312 to 3.12 s, the final signature generation from 2.4 to 24.0 s, and the signature generation without DAFCI (that is, with the whole signature computed on the sensor) from 2.712 to 27.1 s.
3. Performance of auditing: Figures 6 and 7 show the TPA and CSP auditing overheads for different numbers of challenged blocks, respectively. Considering that the detection probability already exceeds 98% when the number of challenged blocks is 400, only challenges of 50 to 500 blocks are shown in the figures. As shown in Figures 6 and 7, the computation overhead of both challenge generation and proof verification increases linearly with the number of challenged blocks: the challenge generation overhead increases from 0.097 to 0.486 s, and the proof verification overhead from 0.5 to 11 s. Compared with the proof verification overhead, the challenge generation overhead is lower and grows more slowly. In summary, as the number of challenged blocks increases, the computation overhead of both the TPA and the CSP increases.
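The linear trends reported for Figures 6 and 7 can be summarized by a per-block cost derived from the reported endpoints. The figures below are computed from the paper's numbers, not measured independently:

```python
def per_block_cost(t_low: float, t_high: float, c_low: int, c_high: int) -> float:
    """Slope of a linear cost curve from its two reported endpoints, in s/block."""
    return (t_high - t_low) / (c_high - c_low)

# Challenge generation: 0.097 s at 50 blocks -> 0.486 s at 500 blocks.
challenge_slope = per_block_cost(0.097, 0.486, 50, 500)
# Proof verification: 0.5 s at 50 blocks -> 11 s at 500 blocks.
verify_slope = per_block_cost(0.5, 11.0, 50, 500)
print(f"challenge: {challenge_slope * 1000:.3f} ms/block, "
      f"verification: {verify_slope * 1000:.3f} ms/block")
```

This works out to under 1 ms per challenged block for challenge generation versus over 20 ms per block for proof verification, which quantifies why the challenge overhead grows so much more slowly.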

Conclusion
This article proposed an efficient and secure public cloud data auditing scheme based on fog-to-cloud computing for IoT scenarios.
Through the encryption of sensitive information, the private key separation method, and the two-time signature method, data sharing under privacy protection is realized, which reduces the computation and communication overhead of IoT devices while ensuring security. The security analysis shows that the DAFCI scheme is safe under the random oracle model. The performance analysis shows that, compared with traditional cloud data auditing schemes and other schemes using fog-to-cloud computing, the DAFCI scheme is more efficient and has certain advantages, so it is better suited to low-power IoT devices. However, considering the rapid development and widespread application of IoT technology, further reducing the computation and communication overhead while ensuring the security of the solution will remain the focus of future work.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported, in part, by the National Natural Science Foundation of China (grant no. 61802106) and the Natural Science Foundation of Hebei Province (grant no. F2016201244).