LI Yumei, ZHANG Futai. Remote Data Auditing for Cloud-Assisted WBANs with Pay-as-You-Go Business Model[J]. Chinese Journal of Electronics, 2023, 32(2): 248-261. DOI: 10.23919/cje.2020.00.314

Remote Data Auditing for Cloud-Assisted WBANs with Pay-as-You-Go Business Model

Funds: This work was supported by the National Natural Science Foundation of China (62172096)
More Information
  • Author Bio:

    LI Yumei: Yumei LI was born in Shandong Province, China. She received the Ph.D. degree in mathematical sciences from Nanjing Normal University. She is currently a Lecturer at Hubei University of Technology. Her research interests include linearly homomorphic signatures and cloud storage security. (Email: leamergo@163.com)

    ZHANG Futai: Futai ZHANG (corresponding author) received the B.S. and M.S. degrees in mathematics from Shaanxi Normal University, China, and Ph.D. degree from Xidian University, China. He is currently a Professor at Fujian Normal University. His main research interests include cryptography and applications of cryptography in cyberspace security. (Email: futai@fjnu.edu.cn)

  • Received Date: September 23, 2020
  • Accepted Date: December 13, 2021
  • Available Online: June 30, 2022
  • Published Date: March 04, 2023
  • As an emerging technology, cloud-assisted wireless body area networks (WBANs) provide more convenient services to users. Recently, many remote data auditing protocols have been proposed to ensure the integrity and authenticity of data that owners outsource to the cloud. However, most of them cannot check data integrity periodically according to the pay-as-you-go business model, and they incur a high tag generation cost that places a heavy burden on data owners. Therefore, we construct a lightweight remote data auditing protocol that overcomes these drawbacks. Our work can be deployed in a public environment without secret channels: it builds on certificate-based cryptography, which avoids the certificate management problem, the key escrow problem, and the need for secret channels. The security analysis illustrates that the proposed protocol is secure, and the performance evaluation shows that it effectively reduces computation and communication overheads.
  • Wireless body area networks (WBANs) are often used to improve the quality of medical treatment and support health-care services [1]. They rely on various sensors to collect medical data and enable remote monitoring of patients' vital signs. As the scale of medical data grows over time, the storage burden renders the devices inefficient. Cloud computing, as an auxiliary means, provides flexible storage capability and cheap services for data owners. Cloud-assisted WBANs overcome the inherent weaknesses of traditional WBANs and enable data owners to store and process the collected data conveniently [2], [3]. However, they face various internal and external attackers. A dishonest cloud service provider (CSP) may mask medical data corruption or loss to maintain an excellent reputation. Even worse, a malicious adversary can distort diagnostic results by falsifying medical data; incorrect diagnoses may delay the treatment of patients and cause serious medical incidents. Among these security issues, the integrity auditing of outsourced data is crucial.

    Downloading the entire file is the intuitive way to check data integrity [4], but it is impractical because it is inefficient. Remote data auditing is a popular model that allows a party to check data integrity without downloading the entire content [5]: it generates a probabilistic proof by sampling random sets of data blocks. Recently, scholars have proposed many schemes to check outsourced data integrity [6]-[12], each with its pros and cons. A common weakness of these schemes is that they support only content integrity checking, which is not sufficient for a CSP using the pay-as-you-go business model [13]. Under this model, a data owner pays a fee for each period based on the actual storage volume. Therefore, a third party auditor (TPA) should be able to check both the integrity and the authenticity of the data periodically.

    In the pay-as-you-go model, data owners only pay for uncorrupted files according to the actual storage volume, and the CSP charges the storage fee based on the data storage conditions. As shown in Fig.1, the storage fee should comply with the following principles: 1) The data owner pays the storage fee for each period in the regular way if the file remains intact; 2) If any data error is detected in the auditing phase, the data owner does not pay the storage fee and the CSP must compensate for the damaged file; 3) If the data owner removes a file from the CSP, he/she pays the storage fee accrued up to that date [14]. A remote data auditing (RDA) protocol satisfying this application should support integrity auditing of both the content and the time of storage (i.e., the timestamp). An obvious solution is to attach a timestamp at the end of the outsourced data to mark the storage time; its weakness is that the timestamp may be lost or corrupted, as there is no relationship between the timestamp and the individual data blocks. A more effective solution is to embed the timestamp in the authentication tag generated for each data block, which ensures a strong binding between the timestamp and each block.
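
    These billing principles amount to a small decision rule. As a minimal sketch (Python; the function name, the sign convention for compensation, and the pro-rata formula are our illustrative assumptions, not part of any cited protocol), a billing module might apply principles 1)-3) as follows:

```python
from enum import Enum

class AuditResult(Enum):
    INTACT = "intact"
    CORRUPTED = "corrupted"

def settle_period(audit: AuditResult, removed: bool, fee_per_period: float,
                  days_used: int, days_in_period: int) -> float:
    """Amount the data owner pays for one period (negative => CSP compensates).

    Principle 1: intact file, full period  -> pay the regular fee.
    Principle 2: corruption detected       -> pay nothing; CSP compensates.
    Principle 3: file removed mid-period   -> pay pro rata up to that date.
    """
    if audit is AuditResult.CORRUPTED:
        return -fee_per_period  # illustrative compensation convention
    if removed:
        return fee_per_period * days_used / days_in_period
    return fee_per_period
```

    For example, an intact file held for a full period yields the regular fee, while a detected corruption yields a negative amount, i.e., compensation owed by the CSP.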

    Figure  1.  Pay-as-you-go model integrated with RDA.

    Moreover, there is another neglected problem: a data owner cannot prove that a file has been uploaded to the CSP. If a file is lost, the CSP may deny that it ever stored the file. Most existing auditing protocols focus only on partial data damage and loss; the case in which the entire file is erased by the CSP is ignored. In this situation, the CSP can claim that it never received the file from the data owner. An effective countermeasure is for the CSP to return an unforgeable voucher to the data owner upon receiving the complete file. With this method, there is no dispute that the CSP bears responsibility for all data damage and loss. Besides, the voucher should be updated by the CSP if the file remained intact in the last period, which also indicates that the data owner has paid the storage fee for the previous period.

    In this paper, we design an efficient certificate-based remote data auditing protocol. Considering the limited computing power of WBANs, one of the main motivations of our work is to reduce the computation cost of tag generation. In the real world, the storage fee is an important factor when data owners select a service provider, so reducing the size of the auditing-related information increases competitiveness. Besides, a certificate-based cryptosystem (CBC) is preferable in cloud-assisted WBANs: it can be deployed over public channels without a fully trusted third party or secret channels.

    We construct a practical remote data auditing protocol for cloud-assisted WBANs with pay-as-you-go business model, which can audit data integrity regularly. The innovations of this paper are as follows.

    1) We design a novel auditing model in which the TPA audits data integrity spontaneously and periodically according to the auditing period. Besides, the model prevents the CSP from hiding data damage or loss to evade compensation.

    2) We construct a homomorphic verifiable tag with low tag generation and verification overheads. Besides, the size of a data block tag is short.

    3) We put forward an efficient remote data auditing protocol for the pay-as-you-go business model. It is a valuable application of CBC to checking data integrity.

    Moreover, we show the correctness of our protocol and provide a rigorous security proof in the random oracle model. We also compare the computation, communication, and storage overheads of our protocol with several related protocols. Besides, experiments show that the performance of our work is desirable.

    Sensors in WBANs are usually assigned to collect and monitor users’ physical information [15]. However, limited storage and computing power restrict the development of WBANs. With the development of cloud computing, the CSP can assist users in storing these sensors’ data, but the integrity and authenticity of data stored in the CSP is a widespread concern among users. The technique of proof of storage (PoS) allows a verifier to check data integrity without holding a local copy [16], offering a practical solution for auditing outsourced data integrity [4].

    There are two main PoS models, namely proof of retrievability (PoR) and provable data possession (PDP). The former was presented by Juels et al. [17] in 2007; in their protocol, the data owner can retrieve the entire file. The notion of PDP was first proposed by Ateniese et al. [5] in the same year. They proposed two PDP protocols based on the RSA cryptosystem and homomorphic verifiable tags (HVTs). Both protocols check a file by randomly choosing some data blocks, so the entire file is not required. To reduce the computation and communication overheads, Shacham and Waters [18] constructed novel HVTs using the BLS signature [19]. Later, various PDP protocols based on this construction were proposed [20]-[30]. Yu et al. [21] proposed an identity-based remote data integrity verification protocol that achieves perfect data privacy preservation. Although their protocol is free of the burden of certificate management, its verification overhead increases linearly with the number of challenged data blocks. Zhang et al. [29] introduced an identity-based cloud storage auditing protocol for shared big data with efficient user revocation. Li et al. [30] proposed a certificateless public data integrity checking protocol for data shared among a group. He et al. [26] presented a certificateless public auditing protocol for cloud-assisted WBANs; their protocol suffers from neither public key certificate management nor the key escrow problem. Huang et al. [31] proposed a certificateless public verification scheme for data storage and sharing in the cloud. However, in these protocols the storage space occupied by a data block’s tag is larger than the data block itself.

    In 2013, Wang et al. [7] assumed that each block consists of n sectors and compressed the n sectors’ signatures into one using homomorphism. That is to say, the size of a data block is (n×q)-bits while the size of its corresponding signature is q-bits. Their scheme also achieves the shortest query for a challenge. Wang et al. [32] proposed a remote integrity auditing protocol that permits checking not only the data content but also the log information about the origin, type, and consistency of the data. Yan et al. [33] introduced a remote data possession checking protocol that supports dynamic data. Wu et al. [14] presented two protocols for the pay-as-you-go business model that allow the data owner or the TPA to verify the integrity of the data content and its timestamp, leaking no information on either to the TPA. Thokchom et al. [34] proposed a protocol for auditing shared dynamic data stored in an untrusted CSP with privacy preservation and user revocation. However, the tag generation cost still increases with the size of the file. Besides, the CSP has to store at least an additional (n×q)-bits of information for file auditing.

    There are three parties in a remote data auditing system: the data owner, the CSP, and the TPA. The CSP is dishonest: it may generate forged proofs that can pass the verification. The TPA is honest: it performs the data auditing periodically on behalf of the data owner. Fig.2 shows the system model, and we describe the working process below.

    Figure  2.  The remote data auditing system.

    1) The data owner splits a file into data blocks, and generates tags for all data blocks and timestamp. The data owner then uploads these data blocks with the corresponding tags to the CSP.

    2) The CSP stores this information if the file is correct and returns a voucher to the data owner. The voucher indicates that the data is intact at this time. Note that the CSP will update and return the voucher after receiving the storage fee.

    3) The TPA initiates a challenge to the CSP for file auditing at the end of each period. The CSP generates a proof and sends it to TPA.

    4) The TPA checks the correctness of the proof and returns a result to the data owner. The data owner pays the storage fee according to the pay-as-you-go model.

    In a practical remote data auditing protocol for cloud-assisted WBANs with a pay-as-you-go business model, the following objectives are required.

    1) Correctness: It is possible to generate valid proofs if and only if the CSP possesses the original file and timestamp.

    2) Verifiability: The TPA can check the file integrity using partial data blocks without accessing the original file.

    3) Periodic auditing: The TPA can audit data integrity spontaneously according to the timestamp and auditing period.

    4) Accountability: There is no dispute that the CSP is the responsible party if any error is detected.

    5) User friendly: The storage space occupied by data block tags should be smaller than the data blocks themselves. The data owner pays as low a storage fee as possible for secure storage without hindering data integrity auditing.

    Definition 1 (Bilinear map) Given three groups $\mathbb{G}_1,\mathbb{G}_2,\mathbb{G}_T$ of prime order $q$, and elements $g\in \mathbb{G}_1$ and $h\in \mathbb{G}_2$, a map $e:\mathbb{G}_1\times \mathbb{G}_2\to \mathbb{G}_T$ is bilinear if the following properties hold:

    1) Bilinearity: The equation $e(g^a,h^b)=e(g,h)^{ab}$ holds for all $a,b$ randomly chosen in $\mathbb{Z}_q^*$.

    2) Non-degeneracy: $e(g,h)\neq 1$ for some $g\in \mathbb{G}_1$, $h\in \mathbb{G}_2$.

    3) Computability: There exists an efficient algorithm to compute $e(g,h)$.

    Definition 2 (Collusion attack algorithm with k traitors (k-CAA) problem assumption) For an integer $x\in \mathbb{Z}_q^*$, given $\{g\in \mathbb{G}_1, h^x\in \mathbb{G}_2, h_1,\ldots,h_k\in \mathbb{Z}_q^*, g^{\frac{1}{x+h_1}},\ldots,g^{\frac{1}{x+h_k}}\}$, compute $g^{\frac{1}{x+h}}$ for some $h\notin \{h_1,\ldots,h_k\}$.

    There is no algorithm to solve the k-CAA problem with a non-negligible advantage in probabilistic polynomial-time.

    Definition 3 (Modified k-CAA problem assumption) For three integers $x,a,b\in \mathbb{Z}_q^*$, given $\{g,g^a\in \mathbb{G}_1, h^x,h^b\in \mathbb{G}_2, h_1,\ldots,h_k\in \mathbb{Z}_q^*, g^{\frac{ab}{x+h_1}},\ldots,g^{\frac{ab}{x+h_k}}\}$, compute $g^{\frac{ab}{x+h}}$ for some $h\notin \{h_1,\ldots,h_k\}$, or compute $g^{ab}$.

    There is no algorithm to solve the modified k-CAA problem with a non-negligible advantage in probabilistic polynomial-time.

    We define a certificate-based remote data auditing protocol (CB-RDAP), which consists of eight polynomial-time algorithms.

    1) Setup: The CSP takes as input a security parameter $1^\lambda$ and outputs $(pp,msk)$, where the parameter $pp$ is published in the system and the master private key $msk$ is known only to the CSP.

    2) UserKeyGen: The user takes input (pp,ID) and outputs a user’s public/private key pair (upkID,uskID), where ID denotes a user’s identity.

    3) Certify: The CSP takes input (pp,msk,ID,upkID) and outputs the corresponding certificate CertID.

    4) TagGen: The user takes input $(pp,ID,usk_{ID},Cert_{ID},fname,t,F)$ and outputs a verifiable file label $\tau$ and the file’s signatures $\{\sigma_i\}$, where $fname$ denotes the filename, $F=(m_1,\ldots,m_m)$ denotes a file consisting of $m$ data blocks, and $t$ denotes the timestamp.

    5) Confirm: The CSP takes input (pp,msk,ID,τ) and outputs the auditing period T and a voucher π if the file keeps intact, outputs “failure” otherwise.

    6) Challenge: The TPA takes input (pp,ID,τ,T) and outputs a challenge chal periodically according to T.

    7) ProofGen: The CSP takes input (pp,ID,τ,{σi},chal) and outputs a possession proof PF.

    8) ProofCheck: The TPA takes input (pp,ID,upkID,τ,chal,PF) and outputs 1 if PF passes the verification, otherwise outputs 0.

    Correctness: For any

    $(pp,msk)\leftarrow \mathsf{Setup}(1^\lambda)$, $(usk_{ID},upk_{ID})\leftarrow \mathsf{UserKeyGen}(pp,ID)$, $Cert_{ID}\leftarrow \mathsf{Certify}(pp,msk,ID,upk_{ID})$,

    if $(\tau,\{\sigma_i\})\leftarrow \mathsf{TagGen}(pp,ID,usk_{ID},Cert_{ID},fname,t,\{m_i\})$, $chal\leftarrow \mathsf{Challenge}(pp,ID,\tau,T)$, and $PF\leftarrow \mathsf{ProofGen}(pp,ID,\tau,\{\sigma_i\},chal)$, then $1\leftarrow \mathsf{ProofCheck}(pp,ID,upk_{ID},\tau,chal,PF)$.

    A CB-RDAP is secure if it meets the following requirements:

    1) If the challenged file stored in CSP is intact and PF is generated by ProofGen(pp,ID,τ,{σi},chal) honestly, the probability of ProofCheck(pp,ID,upkID,τ,chal,PF)=1 is 1.

    2) If the challenged file is damaged or deleted, the probability that the CSP can forge a valid proof PF is negligible.

    3) The CSP cannot deny it has received a file from the data owner successfully if the voucher π is generated by the algorithm Confirm(pp,msk,ID,τ).

    To ensure the correctness and integrity of the data, a secure CB-RDAP should resist the following attacks: 1) a third party (the CSP or system users) forges the tag of a data block; 2) the CSP answers a new challenge with an expired valid proof to deceive the data owner; 3) the CSP generates a valid proof PF using non-challenged data blocks and tags.

    In a secure CB-RDAP, three types of adversaries are considered to cover these attacks. The Type I adversary models a system user’s ability to forge data block tags: it can replace the public keys of some users, but the target user’s certificate is kept secret from it. The Type II adversary plays the role of the CSP and tries to forge data block tags: it holds the master secret key but is not permitted to substitute the target user’s public key. The Type III adversary models the CSP’s ability to forge a valid proof: it attempts to generate a valid proof when some data blocks are damaged. The following oracles are provided for the adversaries.

    • User-key-gen Oracle. The adversary sends a user’s identity ID to the oracle. The oracle generates the user’s public/private key (upkID,uskID) by running the algorithm UserKeyGen and returns this public/private key to the adversary.

    • Corruption Oracle. On input a user’s identity ID, the oracle outputs the corresponding private key uskID if a User-key-gen query has been made on ID. Otherwise, nothing is returned.

    • Certification Oracle. The adversary sends a user’s identity ID and its public key upkID to the oracle. The oracle will run the algorithm Certify to obtain CertID and return it to the adversary.

    • Key-replace Oracle. The adversary provides a user identity ID and a new public/private key pair (upkID,uskID) to the oracle. The oracle records the new key pair.

    • TagGen Oracle. The adversary submits a user’s identity ID, the filename fname with the timestamp t, and a data block $m\in F$. The oracle outputs the data block’s tag σ.

    • ProofCheck Oracle. The adversary generates a proof PF, and returns PF and the corresponding challenge chal to the oracle. The oracle outputs 0 or 1.

    We define the security model of CB-RDAP by the games (Game 1, Game 2, Game 3) between the adversary A (AI, AII, or AIII) and the challenger C. The advantage of A in winning a game is the probability that A breaks the scheme.

    1) Game 1 (Type I adversary AI):

    Initialization: Taking a security parameter $1^\lambda$ as input, the challenger runs Setup$(1^\lambda)$, returns the public parameter pp to AI, and keeps the system master private key msk secret.

    Query: AI is allowed to adaptively query the User-key-gen oracle, Corruption oracle, Certification oracle, Key-replace oracle, and TagGen oracle.

    Forge: AI outputs $(ID, upk_{ID}, \tau, m\in F, \sigma)$. AI wins the game if the following conditions hold:

    a) AI has never queried the certificate of ID.

    b) AI has never made a Corruption query on ID.

    c) AI has never made a TagGen query on (ID, fname, m).

    d) ProofCheck$(pp, ID, upk_{ID}, \tau, \cdot, m, \sigma)=1$.

    2) Game 2 (Type II adversary AII):

    Initialization: Taking a security parameter $1^\lambda$ as input, the challenger runs Setup$(1^\lambda)$ and returns pp and msk to AII.

    Query: AII can adaptively query the User-key-gen oracle, Corruption oracle, and TagGen oracle.

    Forge: AII outputs $(ID, upk_{ID}, \tau, m\in F, \sigma)$. AII wins the game if the following conditions hold:

    a) AII has never made a Corruption query on ID.

    b) AII has never made a TagGen query on (ID, fname, m).

    c) ProofCheck$(pp, ID, upk_{ID}, \tau, \cdot, m, \sigma)=1$.

    3) Game 3 (Type III adversary AIII):

    Initialization: Taking a security parameter $1^\lambda$ as input, the challenger returns the public parameter pp to AIII.

    Query: AIII can adaptively query the TagGen oracle and the ProofCheck oracle.

    Challenge: C generates a challenge chal and sends it to AIII. On receiving chal, AIII computes a proof PF and sends it to C.

    Forge: AIII outputs $(ID, upk_{ID}, \tau, chal, PF)$. AIII wins the game if the following conditions hold:

    a) ProofCheck$(ID, upk_{ID}, \tau, chal, PF)=1$, where $PF=(\tilde{m},\tilde{\sigma})$.

    b) At least one challenged data block has never been submitted to the TagGen oracle.

    Definition 4 A CB-RDAP is secure if the advantages of the probabilistic polynomial-time adversaries AI, AII, and AIII in winning Game 1, Game 2, and Game 3, respectively, are negligible.

    The construction of the linear homomorphic verifiable tags in our protocol is inspired by the work of Ateniese et al. [5] and Shacham and Waters [18]. Our goal is to reduce the signature generation cost and the size of the signatures over the file. We describe the algorithms of our protocol as follows and present the workflow in Fig.3.

    Figure  3.  Workflow of the remote data auditing protocol.

    1) Setup: Taking as input a security parameter $1^\lambda$, the CSP selects three cyclic groups $\mathbb{G}_1,\mathbb{G}_2,\mathbb{G}_T$ of prime order $q$ (the size of $q$ is determined by $\lambda$), a generator $g\in \mathbb{G}_1$, a generator $h\in \mathbb{G}_2$, and a bilinear map $e:\mathbb{G}_1\times \mathbb{G}_2\to \mathbb{G}_T$. The CSP then chooses four collision-resistant hash functions $H_1:\{0,1\}^*\to \mathbb{G}_1$, $H_2:\{0,1\}^*\to \mathbb{Z}_q^*$, $H_3:\{0,1\}^*\to \mathbb{G}_1$, $H_4:\{0,1\}^*\to \mathbb{Z}_q^*$. The CSP sets the system master private key $msk=s$ and the system public key $mpk=h^s$, where $s$ is randomly selected in $\mathbb{Z}_q^*$. The CSP outputs the system public parameters $pp=(q,\mathbb{G}_1,\mathbb{G}_2,\mathbb{G}_T,e,g,h,mpk,H_1,H_2,H_3,H_4)$ and keeps the master private key $msk=s$ secret.

    2) UserKeyGen: Taking as input the parameters $pp$, the user sets his/her private key $usk_{ID}=x$ and public key $upk_{ID}=h^x$, where $x$ is randomly selected in $\mathbb{Z}_q^*$. The user with identity $ID$ then requests a certificate from the CSP by providing the public key $upk_{ID}$.

    3) Certify: Taking the public parameters $pp$, the system master private key $msk$, and a user identity $ID$ as input, the CSP generates the certificate $Cert_{ID}=H_1(ID,upk_{ID})^s$ for the user with identity $ID$ after checking the authenticity of the user identity. The CSP then sends the certificate $Cert_{ID}$ to the user.

    The certificate is valid if $e(Cert_{ID},h)=e(H_1(ID,upk_{ID}),mpk)$ holds.

    4) TagGen: Given an encrypted file $F\in \{0,1\}^*$ with the filename $fname$ and the timestamp $t$, the data owner first splits $F$ into $m$ blocks $m_1,\ldots,m_m$ (each block consists of $n$ sectors) and computes the file label $\tau=fname\|t\|m\|sig_{usk_{ID}}(fname,t,m)$. The data owner then computes the tag of $m_i$, $1\leq i\leq m$, as $\sigma_i=\big(\beta_i\cdot Cert_{ID}^{\sum_{j=1}^{n}\alpha_j m_{ij}}\big)^{\frac{1}{usk_{ID}+\hat{u}}}$, where $\beta_i=H_3(ID,i,fname)$, $\alpha_j=H_4(ID,j,upk_{ID})$, $\hat{u}=H_2(ID,fname,t,upk_{ID})$, and sends $(\tau,m_1,\sigma_1,\ldots,m_m,\sigma_m)$ to the CSP.
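
    The scalar part of TagGen (the sector aggregation $\sum_j \alpha_j m_{ij}$ and the $1/(usk_{ID}+\hat{u})$ exponent) can be sketched over $\mathbb{Z}_q$ as follows. The group exponentiations and the real hash functions are abstracted away: SHA-256 reduced modulo a toy prime Q stands in for $H_2$ and $H_4$, so this is a structural sketch rather than the paper's instantiation.

```python
import hashlib

Q = 2**127 - 1  # a Mersenne prime standing in for the group order q (toy parameter)

def h_zq(*parts: str) -> int:
    """Toy stand-in for the hashes H2/H4 mapping strings into Z_Q^*."""
    d = hashlib.sha256("|".join(parts).encode()).digest()
    return int.from_bytes(d, "big") % (Q - 1) + 1

def tag_exponents(ID: str, upk: str, fname: str, t: str, usk: int, block: list):
    """Scalar exponents used to form sigma_i = (beta_i * Cert_ID^agg)^(1/(usk+u_hat))."""
    # alpha_j = H4(ID, j, upk_ID); aggregate the n sectors: sum_j alpha_j * m_ij mod Q
    agg = sum(h_zq(ID, str(j), upk) * m for j, m in enumerate(block, 1)) % Q
    u_hat = h_zq(ID, fname, t, upk)        # u_hat = H2(ID, fname, t, upk_ID)
    inv = pow((usk + u_hat) % Q, -1, Q)    # the 1/(usk_ID + u_hat) exponent mod Q
    return agg, u_hat, inv
```

    Note that the inverse exponent exists because Q is prime, mirroring the requirement that $usk_{ID}+\hat{u}\neq 0$ in $\mathbb{Z}_q^*$.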

    5) Confirm: On receiving a file from the data owner, the CSP verifies it. The CSP returns “failure” if the file is damaged. Otherwise, the CSP sets the latest audit time $\hat{t}=t$, generates a voucher $\pi=sig_{msk}(ID,fname,t,\hat{t},T)$, and returns $\pi$ to the data owner.

    Note that the auditing period $T$ is chosen by the data owner according to actual demand, and the latest audit time $\hat{t}=\hat{t}+T$ and the voucher $\pi$ are updated after the CSP receives the storage fee from the data owner.

    Upon receiving the voucher $\pi$ from the CSP, the data owner stores the file label $\tau$, the voucher $\pi$, and the auditing period $T$ locally. Meanwhile, the data owner provides $(ID,upk_{ID},\tau,T)$ to the TPA for periodic auditing.

    6) Challenge: The TPA runs this algorithm. For each period, it chooses a random subset $I\subseteq [1,m]$ and selects $c_i\in \mathbb{Z}_q^*$ for every $i\in I$. At the end of each auditing period $T$, the TPA sends the challenge $chal=\{ID,fname,t,(i,c_i):i\in I\}$ to the CSP.
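
    The sampling step can be sketched as follows (Python; make_challenge is a hypothetical helper name, and the dictionary layout of the challenge is our own convention):

```python
import random

def make_challenge(ID, fname, t, m, c, q, rng=None):
    """Pick c distinct block indices from [1, m] with random coefficients in Z_q^*."""
    rng = rng or random.SystemRandom()  # OS-backed randomness for the coefficients
    indices = rng.sample(range(1, m + 1), c)
    return {"ID": ID, "fname": fname, "t": t,
            "pairs": {i: rng.randrange(1, q) for i in indices}}
```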

    7) ProofGen: Upon receiving a challenge $chal$, the CSP computes and returns the proof $PF=\{\tilde{m},\tilde{\sigma}\}$ to the TPA, where $\tilde{m}=\sum_{i\in I}c_i m_i=(\tilde{m}_1,\ldots,\tilde{m}_n)$ and $\tilde{\sigma}=\prod_{i\in I}\sigma_i^{c_i}$.
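
    The sector-wise combination $\tilde{m}=\sum_{i\in I}c_i m_i$ is plain modular arithmetic and can be sketched as below (Python; the tag aggregation $\tilde{\sigma}=\prod_{i\in I}\sigma_i^{c_i}$ is a group operation and is omitted here):

```python
def aggregate_blocks(pairs, blocks, n, q):
    """Compute tilde_m = sum_i c_i * m_i sector-wise over Z_q.

    pairs:  {block index i: coefficient c_i} from the challenge
    blocks: {block index i: list of n sectors, each an integer mod q}
    """
    tilde_m = [0] * n
    for i, c_i in pairs.items():
        for j in range(n):
            tilde_m[j] = (tilde_m[j] + c_i * blocks[i][j]) % q
    return tilde_m
```

    The CSP only touches the challenged blocks, which is what keeps the proof size independent of the file size.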

    8) ProofCheck: On receiving a response $PF$, the TPA computes $\beta_i=H_3(ID,i,fname)$, $\alpha_j=H_4(ID,j,upk_{ID})$, $\hat{u}=H_2(ID,fname,t,upk_{ID})$ and then checks the proof by the following equation:

    $e(\tilde{\sigma}, upk_{ID}\cdot h^{\hat{u}})=e\big(\textstyle\prod_{i\in I}\beta_i^{c_i}, h\big)\cdot e\big(H_1(ID,upk_{ID})^{\sum_{j=1}^{n}\alpha_j \tilde{m}_j}, mpk\big) \quad (1)$

    If the above equation holds, the challenged file is intact. The data owner pays the storage fee to CSP. Otherwise, the data owner claims for compensation.

    Assuming all entities faithfully follow the protocol, we can check the correctness of the verification equation:

    $e(\tilde{\sigma}, upk_{ID}\cdot h^{\hat{u}}) = e\big(\prod_{i\in I}\sigma_i^{c_i}, h^{x+\hat{u}}\big) = e\Big(\big(\prod_{i\in I}\beta_i^{c_i}\cdot H_1(ID,upk_{ID})^{s\sum_{j=1}^{n}\alpha_j \tilde{m}_j}\big)^{\frac{1}{x+\hat{u}}}, h^{x+\hat{u}}\Big) = e\big(\prod_{i\in I}\beta_i^{c_i}\cdot H_1(ID,upk_{ID})^{s\sum_{j=1}^{n}\alpha_j \tilde{m}_j}, h\big) = e\big(\prod_{i\in I}\beta_i^{c_i}, h\big)\cdot e\big(H_1(ID,upk_{ID})^{\sum_{j=1}^{n}\alpha_j \tilde{m}_j}, mpk\big) \quad (2)$

    We prove the proposed protocol is secure under adaptive chosen identity attacks and adaptive chosen file attacks in random oracle model. The security proof is conducted as follows: 1) The single tag of the data block is unforgeable. 2) The proof PF is unforgeable.

    Theorem 1 If AI forges a single tag with advantage $\epsilon$ in running time $t$, the challenger can solve the modified k-CAA problem with advantage $\epsilon'\geq \big(1-\frac{1}{q_u}\big)^{q_r+q_e}\big(1-\frac{1}{q_t+1}\big)^{q_t}\frac{1}{(q_t+1)q_u}\epsilon$, where $q_u,q_r,q_e,q_t$ are the numbers of User-key-gen, Corruption, Certification, and TagGen queries, respectively.

    Proof Appendix A shows the detailed proof.

    Theorem 2 If a Type II adversary forges a single tag with advantage $\epsilon$ in running time $t$, the challenger can solve the k-CAA problem with advantage $\epsilon'\geq \big(1-\frac{1}{q_u}\big)^{q_r}\big(1-\frac{1}{q_t+1}\big)^{q_t}\frac{1}{(q_t+1)q_u}\epsilon$, where $q_u,q_r,q_t$ are the numbers of User-key-gen, Corruption, and TagGen queries, respectively.

    Proof Appendix B shows the detailed proof.

    Theorem 3 If the challenged file is damaged or deleted, the CSP cannot forge the valid proof with a non-negligible probability.

    Proof Appendix C shows the detailed proof.

    If the data owner uploads all data blocks and the file’s related information successfully, the CSP offers several audit periods (such as a week, a month, etc.) for the data owner to choose from. The data owner selects an appropriate auditing period T according to actual needs.

    The data owner with identity ID must provide the public key $upk_{ID}$, the filename $fname$, and the number of data blocks $m$ to the TPA for public auditing. In our protocol, the data owner also provides additional information, including the latest auditing time $\hat{t}=t$ and the auditing period $T$. The TPA computes the next audit time according to $\hat{t}=\hat{t}+T$ and thereby realizes periodic auditing: it can issue a valid challenge according to $fname$ and $m$ at the latest audit time. In summary, the TPA can periodically audit the file and the timestamp stored in the CSP.

    The data owner and the CSP are the two entities that can be responsible for files. Once a file is detected to be damaged, the error may have occurred during the upload phase or the storage phase. To reduce controversy, we add the Confirm function to our protocol. The CSP generates and returns a voucher $\pi$ to the data owner if the file’s related information $(fname,t,m)$ and all data blocks are verified to be correct. This voucher proves that the file was uploaded successfully; after that, any detected data error must be a storage error.

    In a remote data auditing system, the CSP stores all data blocks’ tags to ensure data integrity, and the data owner pays a storage fee for the volume of the tags. In our protocol, the size of each data block is (n×q)-bits (each sector is an element of $\mathbb{Z}_q^*$ and n is the number of sectors per block), while the size of its corresponding tag σ is about q-bits (i.e., one element of $\mathbb{G}_1$), so the data owner only pays a fixed tag-storage fee per block. The larger the data block, the lower the relative storage fee. However, n should be set to a reasonable value since the CSP must return a proof $PF=\{\tilde{m},\tilde{\sigma}\}$ in the auditing phase, and an oversized data block brings a high communication cost. Therefore, to reduce the storage cost, n should be as large as possible without obstructing data integrity auditing.

    We summarize the efficiency and functionality of our protocol in terms of computation cost, communication cost, storage cost, and detection rate, and compare our protocol with Wang et al.’s protocol [7] and Wu et al.’s PDP protocol [14]. For simplicity, in this section we assume the data owner stores a file with m data blocks, each containing n sectors, and the TPA checks the file’s integrity by challenging c distinct data blocks. The notation is given in Table 1.

    Table  1.  Notations
    Notations Descriptions
    $T_H$ A map-to-point hash computation cost
    $T_P$ A bilinear pairing computation cost
    $T_{E_1},T_{E_2},T_{E_T}$ An exponentiation cost in $\mathbb{G}_1,\mathbb{G}_2,\mathbb{G}_T$, respectively
    $T_{M_1},T_{M_2},T_{M_T}$ A multiplication cost in $\mathbb{G}_1,\mathbb{G}_2,\mathbb{G}_T$, respectively
    $|G_1|,|G_2|,|G_T|$ The binary length of an element in $\mathbb{G}_1,\mathbb{G}_2,\mathbb{G}_T$
    $|Z_q|$ The binary length of an element in $\mathbb{Z}_q^*$
    $|sig|$ The binary length of a signature in the cited signature scheme

    1) Computation cost: Table 2 lists the computation cost of our protocol and the other two protocols. The comparison shows that the computation cost of our protocol is independent of the number of data block sectors n; moreover, our protocol has a lower computation cost.

    Table  2.  Comparison of computation cost
    Ref. TagGen ProofGen ProofCheck
    [7] $mn(2T_{E_1}+T_H+T_{M_1})$ $(c-1)T_{M_1}+cT_{E_1}+nT_{E_T}$ $(n+c+1)T_{E_1}+cT_H+nT_{M_1}+nT_{M_T}+2T_P$
    [14] $mn(2T_{E_1}+T_H+T_{M_1})$ $(c+2)T_{E_1}+(c-1)T_{M_1}+2T_{M_2}$ $(c+n)T_{E_1}+(c+2)T_{M_1}+2T_{E_2}+2T_{M_2}+5T_P$
    Ours $m(2T_{E_1}+T_H+T_{M_1})$ $cT_{E_1}+(c-1)T_{M_1}$ $(c+1)T_{E_1}+T_H+(c-1)T_{M_1}+T_{E_2}+T_{M_2}+3T_P$

    2) Communication cost: From Table 3, the data owner uploads $mn|Z_q|+m|G_1|+|sig|$ bits to the CSP in our protocol, which is $n|G_1|$ bits fewer than Wu et al.’s protocol [14]. In the auditing phase, our protocol transfers $(c+n)|Z_q|+|G_1|$ bits, which is lower than Wang et al.’s protocol [7] and acceptable compared to Wu et al.’s PDP protocol [14].

    Table  3.  Comparison of communication cost
    Ref. Outsourcing storage Auditing
    [7] $mn|Z_q|+m|G_1|+|sig|$ $(c+n)|Z_q|+|G_1|+n|G_T|$
    [14] $mn|Z_q|+(m+n)|G_1|+|sig|$ $(c+4)|Z_q|+3|G_1|+2$
    Ours $mn|Z_q|+m|G_1|+|sig|$ $(c+n)|Z_q|+|G_1|$

    3) Storage cost: We only consider the storage cost on the CSP side, where the CSP stores all verification information, i.e., all data blocks and their tags. In Wang et al.’s protocol [7], the CSP stores $mn|Z_q|+(m+n)|G_1|+|sig|$ bits of verification information, while in [14] it stores about $(mn+1)|Z_q|+(m+n)|G_1|+|sig|$ bits. In our construction, the CSP only needs $mn|Z_q|+m|G_1|+|sig|$ bits. Obviously, the storage cost of our protocol is less than that of the other two protocols.
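
    These three formulas can be evaluated directly. The sketch below uses the 80-bit parameters from the experiments ($|Z_q|=|G_1|=160$ bits); the 320-bit $|sig|$ and the concrete m, n are our illustrative assumptions:

```python
def storage_bits(m, n, zq=160, g1=160, sig=320):
    """Bits of verification information stored at the CSP under each formula."""
    ours = m * n * zq + m * g1 + sig               # our protocol
    wang = m * n * zq + (m + n) * g1 + sig         # Wang et al. [7]
    wu = (m * n + 1) * zq + (m + n) * g1 + sig     # Wu et al. [14]
    return ours, wang, wu

ours, wang, wu = storage_bits(m=525, n=100)
assert ours < wang < wu  # our protocol stores the least, as claimed
```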

    4) Detection rate analysis: Suppose m is the total number of data blocks and $x\leq m$ is the number of corrupted data blocks, so $k=x/m$ is the file corruption rate. If a file is corrupted, we use $P_x$ to denote the probability of successful detection, and we let $c'$ denote the number of corrupted data blocks chosen in the challenge phase. We can see that $P_x=1$ if $c>m-x$, and $P_x=P\{c'\geq 1\}=1-P\{c'<1\}$ if $c\leq m-x$.

    Since P\{ {c' < 1}\} = \frac{\binom{m-x}{c}}{\binom{m}{c}} =\frac{(m-x)\ldots(m-x-(c-1))}{m \ldots (m-(c-1))}, and \frac{m-x-(c-1)}{m-(c-1)}\leq \frac{m-x-i}{m-i}\leq\frac{m-x}{m} for 0\leq i\leq c-1 , P_x satisfies:

    1-\left(1-\dfrac{x}{m}\right)^c \leq P_x \leq 1-\left(1-\dfrac{x}{m-c+1}\right)^c (3)

    From (3), we have 1-P_x\leq (1-k)^c . Therefore, to ensure P_x \geq 0.99 when the file corruption rate k is 3 \% , the TPA needs to choose 151 data blocks at random; if k=5\% , the TPA needs to choose 90 data blocks at random.
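The lower bound 1-P_x\leq(1-k)^c can be inverted to estimate how many blocks the TPA must challenge for a target detection rate; a small sketch (the helper name blocks_needed is ours):

```python
import math

def blocks_needed(k, target=0.99):
    """Smallest c with 1 - (1 - k)**c >= target, inverting the
    lower bound of (3); k is the file corruption rate."""
    return math.ceil(math.log(1 - target) / math.log(1 - k))

print(blocks_needed(0.05))  # 90 challenged blocks suffice for k = 5%
```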

    We run experiments on a laptop (4 GB RAM, Intel i5 3.2 GHz quad-core processor) and employ the standard pairing f.param of JPBC [35] to run these protocols. Under the standard pairing f.param (80-bit security level), the size of elements in \mathbb{G}_1 is 160 bits and in \mathbb{G}_2 is 320 bits. We choose a 1 MB (1048576 bytes) file to test the performance of our protocol, and the size of a data sector is 160 bits. Therefore, m and n should satisfy \frac{160(m-1)n}{8} \leq 1048576 \leq\frac{160mn}{8} .
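The constraint ties the block count m to the sector count n; a quick check of the experimental parameters (the helper name is ours):

```python
import math

FILE_BITS = 1048576 * 8   # 1 MB test file
SECTOR_BITS = 160         # one data sector = one 160-bit Z_q element

def num_blocks(n):
    """Smallest m with 160*m*n/8 >= 1048576, i.e. the constraint above."""
    return math.ceil(FILE_BITS / (SECTOR_BITS * n))

print(num_blocks(100))  # 525 blocks, the tag generation setting
print(num_blocks(20))   # 2622 blocks, the proof generation setting
```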

    1) Tag generation cost: Let n vary between 1 and 100. We measure the computation cost of our protocol, Wang et al.’s protocol [7], and Wu et al.’s protocol [14] in the tag generation phase. We employ the BLS [19] to implement all the cited signature schemes in the three protocols.

    As shown in Fig.4, the tag generation cost for signing a file is nearly constant in [7] and [14], while in our protocol the time cost decreases as n grows. Our protocol only requires 2.17 s to sign a 1 MB file with 525 data blocks and 100 sectors. Moreover, we compare the tag generation cost for different file sizes in Fig.5, and our protocol remains efficient.

    Figure  4.  Computation cost for tag generation.
    Figure  5.  Tag generation cost for different files.

    2) Proof generation cost: We test the proof generation cost by playing the role of the CSP. In this experiment, we split the file into 2622 data blocks (the number of sectors per block is fixed at 20) and choose c in the range of 20 to 160. Fig.6 shows that the proof generation cost is higher when c=160 than when c=20 , and that the time cost increases linearly with c in all three protocols; the difference among the three protocols is tiny. The proof generation cost is about 0.3 s for 160 challenged data blocks in our protocol.

    Figure  6.  Proof generation cost.

    3) ProofCheck cost: The TPA runs the algorithm ProofCheck to verify the validity of the proof. Let c vary between 20 and 160. Fig.7 illustrates that our protocol takes less time to verify proofs than the other two protocols. Besides, the proof verification cost increases linearly with c in all three protocols; it is about 0.55 s for 160 challenged data blocks in our protocol.

    Figure  7.  Proof verification cost.

    4) Detection rate: Considering numbers of challenged data blocks from 20 to 160, we give the probability of successfully detecting whether the file is contaminated. We test the file with 3 \% , 5 \% , 10 \% , and 15 \% corruption rates, respectively. From Fig.8, we can observe that the more data blocks are selected, the higher the probability of detecting a damaged file. If the file corruption rate is 3 \% , Fig.8 shows that 140 data blocks are required to reach a detection rate higher than 0.99, while 100 data blocks suffice if the detection rate need only be at least 0.97. If the file corruption rate is 15 \% (i.e., 67 data blocks are corrupted), 40 data blocks make the detection rate close to 1, and 20 data blocks suffice for a detection rate of at least 0.96. The results agree with the theoretical analysis.
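The curves of Fig.8 can be reproduced approximately by sampling challenged subsets without replacement; a Monte Carlo sketch, assuming a file of 447 blocks (inferred from 67 corrupted blocks corresponding to 15%; function names are ours):

```python
import random

def detect_rate(m, x, c, trials=20000, seed=1):
    """Empirical probability that a random c-subset of m blocks contains
    at least one of the x corrupted blocks (labelled 0..x-1)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if any(i < x for i in rng.sample(range(m), c)))
    return hits / trials

# 15% corruption (67 of an assumed 447 blocks), 40 challenged blocks:
# the empirical detection rate is close to 1, matching Fig.8.
print(detect_rate(447, 67, 40))
```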

    Figure  8.  The probability of detecting successfully.

    This paper presents a remote data auditing protocol that is suitable for the pay-as-you-go business model. In our construction, the TPA can check whether the data is stored correctly in each auditing period. The data owner pays the storage fee according to the pay-as-you-go business model as long as the file remains intact. Once a file is detected to be corrupted or lost, the data owner stops paying the storage fee and requires the CSP to compensate for the damaged file. We prove the correctness and security of our protocol in the random oracle model. We then analyze the computational cost from theoretical and experimental aspects, respectively. The experimental results illustrate that the proposed protocol is more practical than the other two protocols.

    Theorem 1 shows that our protocol is secure against the Type I adversary, who can replace users’ public keys but cannot obtain the target user’s certificate.

    Proof Suppose pp=(q, \mathbb{G}_1, \mathbb{G}_2, \mathbb{G}_T,g,h, e) are public parameters. We construct an algorithm {\cal{B}} that solves the modified k -CAA problem in polynomial time whenever the adversary succeeds.

    For x, a, b\in \mathbb{Z}_q^* , given a modified k -CAA problem instance \{h_1,\dots,h_k\in \mathbb{Z}_q^*,\, g,g^b\in \mathbb{G}_1,\,h^x,h^a\in \mathbb{G}_2,g^{\frac{ab}{x+h_1}}, \ldots,g^{\frac{ab}{x+h_k}}\}, {\cal{B}} can compute g^{\frac{ab}{x+h^{*}}} when {h}^{*} \notin \left\{ {{h}_{1},\ldots,{h}_{k}}\right\} or g^{ab} .

    Initialization (Phase 1): {\cal{B}} lets pk=h^a , chooses a hash function H_4: \left\{ 0, 1 \right\}^{*}\times \mathbb{Z}_q \to \mathbb{Z}_q , and models the three hash functions H_1, H_2,H_3 as random oracles. Let \psi: \mathbb{G}_2 \rightarrow \mathbb{G}_1 denote a one-way mapping, and let sig be a secure signature scheme. pp=(q, \mathbb{G}_1, \mathbb{G}_2, \mathbb{G}_T,g,h,e) are known to both {\cal{B}} and {\cal{A}}_{\rm{I}}.

    Oracle simulation (Phase 2): {{\cal{A}}}_{{\rm{I}}} issues oracles adaptively.

    • User-key-gen query. Suppose the i -th identity is marked as ID_i , and {\cal{B}} chooses ID_{I} from \left\{ID_1,\ldots,ID_{q_u}\right\} as the challenge identity. {\cal{B}} maintains an initially empty list L_u of tuples \left({ID}_{i},{usk}_{i},{upk}_{i}\right) . {\cal{B}} checks the list L_u when receiving a query from {\cal{A}}_{\rm{I}}. If {\cal{B}} finds ID_i in the L_u list, it sends (usk_i,upk_i) to {\cal{A}}_{\rm{I}}. Otherwise, {\cal{B}} does as follows:

    1) If i \ne I , {\cal{B}} chooses {x}_{i} \in { \mathbb{Z}_{q}}^{*} at random, sets {upk}_{i}={h}^{{x}_{i}} and {usk}_{i}={x}_{i} . In this case, {\cal{B}} returns \left({usk}_{i},{upk}_{i}\right) to {\cal{A}}_{{\rm{I}}} and inserts ({ID}_{i},{usk}_{i},{upk}_{i}) into L_u .

    2) If i =I , {\cal{B}} arranges {upk}_{i}={h}^{x} , where the user private key is unknown to {\cal{B}} . In this case, {\cal{B}} adds ({ID}_{i},\triangle,{upk}_{i}) into the list {L}_{u} and returns {upk}_{i} to adversary {{\cal{A}}}_{{\rm{I}}}.

    • H_1 query. {\cal{B}} maintains an initially empty list L_{H1} of tuples \left({ID}_{i}, H_{1i}, {d}_{i}\right) . If {\cal{B}} finds ID_i in the L_{H1} list, it sends H_{1i} to {\cal{A}}_{\rm{I}}. Otherwise, {\cal{B}} does as follows:

    1) If i \ne I , {\cal{B}} chooses {d}_{i}\in{\mathbb{Z}_{q}}^{*} at random, computes {H}_{1}\left({ID}_{i}, {upk}_{i}\right)={g}^{{d}_{i}} , sends {H}_{1i} to {{\cal{A}}}_{{\rm{I}}} and inserts \left({ID}_{i},{H}_{1i},{d}_{i}\right) into L_{H1} .

    2) If i = I , {\cal{B}} arranges {H}_{1}\left({ID}_{i}, {upk}_{i}\right)={g}^{b} , returns {H}_{1i} to adversary {{\cal{A}}}_{{\rm{{\rm{I}}}}} and inserts \left({ID}_{i},{H}_{1i},\perp\right) into L_{H1} .

    • Corruption query. When receiving a corruption query from {\cal{A}}_{\rm{I}}, {\cal{B}} terminates the simulation and outputs \perp if ID_i=ID_{I} or ID_i is not in L_u . Otherwise, {\cal{B}} finds x_i from L_u and returns it to {{\cal{A}}}_{\rm{I}} .

    • Certification query. {\cal{B}} maintains an initially empty list L_c of tuples \left(ID_i,Cert_{ID_i}\right) . If {\cal{B}} finds ID_i in {L}_{c} , it sends Cert_{ID_i} to {{\cal{A}}}_{{\rm{I}}}. Otherwise, {\cal{B}} does as follows:

    1) If i \ne I , {\cal{B}} calculates {Cert}_{{ID}_{i}}={H}_{1}({ID}_{i}, {upk}_{i})^{a}= ({\psi(h)^{d_i}})^a={\psi(h^a)}^{d_i}, where {d}_{i} is extracted from the {L}_{H1} list. Then {\cal{B}} returns {Cert}_{{ID}_{i}} to {{\cal{A}}}_{{\rm{I}}} and adds \left({ID}_{i},{Cert}_{{ID}_{i}}\right) into the {L}_{c} list.

    2) If i = I , {\cal{B}} outputs \perp .

    • Key-replace query. When receiving a new key pair ({usk}^{'},{upk}^{'}) on {ID}_{i} , {\cal{B}} checks whether h^{usk^{'}}=upk^{'} holds. If the equation holds, {\cal{B}} inserts \left({ID}_{i}, usk^{'}, {upk}^{'}\right) into L_u .

    • H_{2} query. {\cal{B}} maintains an initially empty list L_{H2} of tuples ({ID}_{i},fname,t,{upk}_{i}, {H}_{2i},{c}) . If {\cal{B}} finds (ID_i,fname,t) in L_{H2} , it returns H_{2i} to the adversary. Otherwise, {\cal{B}} does as follows:

    1) If i \ne I , {\cal{B}} picks {H}_{2i}\in \mathbb{Z}_q^{*} randomly, and returns {H}_{2i} . Also, {\cal{B}} inserts ({ID}_{i},fname,t,{upk}_{i}, {H}_{2i},\bot) into {L}_{H2} .

    2) If i = I , {\cal{B}} flips a coin: {c}=1 represents heads, with probability \zeta , and {c}=0 represents tails, with probability 1-\zeta .

    a) If {c}=1 , {\cal{B}} selects {H}_{2i}\in \mathbb{Z}_{q}^{*} at random, and {H}_{2i} \notin \left\{{h}_{1},...,{h}_{k}\right\}. {\cal{B}} then sends {H}_{2}({ID}_{i},fname,t,{upk}_{i}) = {H}_{2i} to {{\cal{A}}}_{{\rm{I}}}, and adds \left({ID}_{i},fname,t,{upk}_{i}, {H}_{2i},{c} \right) into {L}_{H2} list.

    b) If {c}=0 , {\cal{B}} chooses \hat{h}\in\left\{{h}_{1},\ldots,{h}_{k}\right\} that has never been selected, and sets {H}_{2}({ID}_{i},fname,t,{upk}_{i})=\hat{h} . Finally, {\cal{B}} sends \hat{h} to the adversary, and inserts \left({ID}_{i},fname,t,{upk}_{i}, \hat{h},{c} \right) into {L}_{H2} .

    • H_3 query. {\cal{B}} maintains an initially empty list L_{H3} of tuples \left({ID}_{i}, fname, {\beta}_{1},\ldots,{\beta}_{m}\right) . If {\cal{B}} finds (ID_i,fname) in L_{H3} , it sends {\beta}_{1},\ldots,{\beta}_{m} to {{\cal{A}}}_{{\rm{I}}}. Otherwise, {\cal{B}} does as follows:

    1) If i \ne I , {\cal{B}} randomly chooses {\beta}_{1},\ldots,{\beta}_{m}\in \mathbb{G}_{1} and sets {H}_{3}\left({ID}_{i},j,fname\right)={\beta}_j,1\leq j \leq m . {\cal{B}} returns {\beta}_{1},\ldots,{\beta}_{m} to {{\cal{A}}}_{{\rm{I}}} and adds \left({ID}_{i}, fname, {\beta}_{1},\ldots,{\beta}_{m}\right) to the list {L}_{H3} .

    2) If i = I , {\cal{B}} first takes out H_{2i} from L_{H_2} list and randomly selects r_{j} \in \mathbb{Z}_q^{*},1\leq j \leq m . {\cal{B}} then computes \beta_{j}=(\psi(h^x) \cdot \psi(h)^{\hat{h}})^{r_{j}} and sets H_3(ID_i,j,fname)=\beta_{j} . {\cal{B}} returns {\beta}_{1},\ldots,{\beta}_{m} to {{\cal{A}}}_{{\rm{I}}} and stores \left({ID}_{i}, fname, {\beta}_{1},\ldots,{\beta}_{m}\right) to the list {L}_{H3} .

    • TagGen query. {{\cal{A}}}_{{\rm{I}}} submits {ID}_{i} , the filename fname , the file’s timestamp t , a data block \boldsymbol{m}_j \in F , j\in [1,m] , and the user’s current private key {usk}_{i} (if {upk}_{i} has not been replaced, then {usk}_{i}=\bot ). {\cal{B}} outputs \perp if {ID}_{i} has not been created.

    1) If i\ne I , {\cal{B}} first seeks usk_i from L_u , Cert_{ID_i} from L_c and H_{2i} from L_{H2} .

    a) {\cal{B}} computes sig_{usk_i}(fname,t,m) and sets the file label \tau=fname||t||m||sig_{usk_i}(fname,t,m) .

    b) {\cal{B}} computes the tag of the data block {\boldsymbol{m}}_{j} by the following equation

    {\sigma}_{j} ={{\left(\beta_{j} \cdot {Cert_{ID_i}}^{\sum_{k=1}^{n}{H_4(ID,k) \cdot m_{jk}}}\right)^{\frac{1}{x_i+H_{2i}}}}}

    c) {\cal{B}} responds label \tau and data block’s tag \sigma_j to {{\cal{A}}}_{{\rm{I}}}.

    2) If i=I , the user’s certificate {Cert}_{{ID}_{I}} of {ID}_{I} is unknown to {\cal{B}} . {\cal{B}} looks up L_{H2} .

    a) {\cal{B}} aborts the simulation if {c}=1 .

    b) {\cal{B}} takes out {H}_{2i}=\hat{h}\in\left\{{h}_{1},\ldots,{h}_{k}\right\} from L_{H2} if {c}=0 and upk_i is original.

    c) {\cal{B}} calculates sig_{usk_i}(fname,t,m) and sets the file label \tau=fname||t||m||sig_{usk_i}(fname,t,m) .

    d) {\cal{B}} computes the tag of the data block {\boldsymbol{m}}_{j} by the following equation

    {\sigma}_{j} =\psi(h)^{r_j} \cdot ((g^{ab})^{\frac{1}{x+\hat{h}}})^{\sum_{k=1}^{n}{H_4(ID,k) \cdot m_{jk}}}

    e) {\cal{B}} responds label \tau and data block’s tag \sigma_j to {{\cal{A}}}_{{\rm{I}}}.

    Output (Phase 3): {{\cal{A}}}_{{\rm{I}}} outputs \left({ID}^{*},{upk}^{*}, \tau^*,\boldsymbol{m}^{*}, \sigma^{*} \right) . {{\cal{A}}}_{{\rm{I}}} wins the game if the following conditions hold:

    a) A corruption query on {ID}^{*} has never been launched.

    b) {{\cal{A}}}_{I} has never queried the certificate of {ID}^{*} .

    c) The tag of \left({ID}^{*},fname^*,\tau^*,\boldsymbol{m}^{*}\right) has never been queried.

    d) ProofCheck \left(pp,ID^{*}, {upk}^{*}_{ID}, \tau^*,\perp, \{{\boldsymbol{m}^{*}, \sigma^{*}}\}\right) =1. Note that {\cal{B}} knows {usk}^{*} even if {{\cal{A}}}_{{\rm{I}}} has replaced the public key. We have:

    \begin{split} \sigma^{*}&=({H}_{3}(ID^{*},upk^{*})\cdot Cert_{ID^*}^{\sum_{k=1}^{n}{H_4(ID^*,k)\cdot m_{k}}})^{\frac{1}{{usk}^{*}+h^{*}}}\\& =(\psi(h)^{r^*(usk^{*}+h^{*})} \cdot g^{ab\sum_{k=1}^{n}{H_4(ID^*,k)\cdot m_{k}}})^{\frac{1}{{usk}^{*}+h^{*}}}\\& =\psi(h)^{r^*}\cdot (g^{ab\sum_{k=1}^{n}{H_4(ID^*,k)\cdot m_{k}}})^{\frac{1}{usk^{*}+h^{*}}}\\& \Rightarrow \left(\frac{\sigma^{*}}{\psi(h)^{r^*}}\right)^{(\sum_{k=1}^{n}{H_4(ID^*,k)\cdot m_{k}})^{-1}}\\& =(g^{ab})^{(x+h^*)^{-1}} \qquad (ID^*=ID_I) \\[-10pt] \end{split}

    where {h}^{*}={{H}_{2}}^{*}\left({ID}^{*}, {fname}^{*}, t,{upk}^{*} \right)=\hat{h} , usk^*= x, msk=a and {H}_{3}(ID^{*},upk^{*})=\psi(h)^{r^*(x+h^*)} .

    If {ID}^{*}\ne {ID}_{I} or {ID}^{*}={ID}_{I} but {c}^{*}=0 , {\cal{B}} aborts the game. Otherwise, if {c}^{*}=1 and {h}^{*} \notin \left\{{h}_{1},\ldots,{h}_{k}\right\} , {\cal{B}} can solve the modified k -CAA problem.

    1) If {\cal{A}}_{\rm{I}} has never replaced the public key {upk}^{*} , {\cal{B}} computes g^{\frac{ab}{{x}+{h}^{*}}} as the solution of the modified k -CAA problem.

    2) If {\cal{A}}_{\rm{I}} has replaced the public key {upk}^{*} , {\cal{B}} computes {g}^{ab} =\left(g^{\frac{ab}{{usk}^{*}+{h}^{*}}}\right)^{{\left({usk}^{*}+{h}^{*}\right)}} as the solution of the modified k -CAA problem.

    Probability analysis: Suppose {\cal{B}} can get the solution of the modified k -CAA problem, the following conditions must be held:

    a) E_1 : The simulation is never aborted in the Corruption query, Certification query, and TagGen query phases. The probability is {\rm{Pr}}[E_1]= (1-\frac{1}{q_u})^{q_r+q_e}(1-\frac{1}{q_u}\zeta)^{q_t}\geq (1- \frac{1}{q_u})^{q_r+q_e}(1-\zeta)^{q_t} .

    b) E_2 : {\cal{A}}_{\rm{I}} forges a valid signature. The probability is {\rm{Pr}}[E_2|E_1]=\epsilon .

    c) E_3 : ID^{*}=ID_{\rm{I}} and c^{*}=1 . The probability is {\rm{Pr}}[E_3|E_1 \wedge E_2]=\frac{\zeta}{q_u} .

    The probability that {\cal{B}} succeeds is \epsilon'={\rm{Pr}}[E_1\wedge E_2 \wedge E_3]= {\rm{Pr}}[E_1] \cdot {\rm{Pr}}[E_2|E_1]\cdot {\rm{Pr}}[E_3|E_1\wedge E_2]\geq (1-\frac{1}{q_u})^{q_r+q_e}(1-\zeta)^{q_t}\frac{\zeta}{q_u}\epsilon. Setting \zeta=\frac{1}{q_t+1} maximizes (1-\zeta)^{q_t}\zeta , so we have

    \epsilon' \geq\left(1-\frac{1}{q_u}\right)^{q_r+q_e}\left(1-\frac{1}{q_t+1}\right)^{q_t}\frac{1}{(q_t+1)q_u}\epsilon
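The bound can be evaluated numerically to see how much of the adversary’s advantage \epsilon the reduction retains; a sketch with illustrative query budgets (all parameter values below are our assumptions, not figures from the paper):

```python
def reduction_loss(q_u, q_r, q_e, q_t):
    """Lower bound on eps'/eps from the probability analysis above."""
    return ((1 - 1 / q_u) ** (q_r + q_e)
            * (1 - 1 / (q_t + 1)) ** q_t
            / ((q_t + 1) * q_u))

# Illustrative budgets: 2^10 users/corruption/certification queries,
# 2^20 tag queries. The loss is roughly e^{-2} * e^{-1} / (q_u * (q_t+1)):
# tiny, but only polynomially small, so eps' is non-negligible when eps is.
loss = reduction_loss(q_u=2**10, q_r=2**10, q_e=2**10, q_t=2**20)
print(loss)
```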

    Theorem 2 shows that our protocol is secure against the Type II adversary, who holds the master key but cannot replace users’ public keys.

    Proof Suppose pp=(q, \mathbb{G}_1, \mathbb{G}_2, \mathbb{G}_T,g,h, e) are public parameters. We construct an algorithm {\cal{B}} that solves the k -CAA problem in polynomial time whenever the adversary succeeds.

    For x\in \mathbb{Z}_q^* , given a k -CAA problem instance \{h_1,\dots,h_k\in \mathbb{Z}_q^*, g\in \mathbb{G}_1,h^x\in \mathbb{G}_2,g^{\frac{1}{x+h_1}},\ldots,g^{\frac{1}{x+h_k}}\}, {\cal{B}} can compute g^{\frac{1}{x+h^{*}}} when {h}^{*} \notin \left\{ {{h}_{1},\ldots,{h}_{k}}\right\} .

    Initialization (Phase 1): {\cal{B}} selects s\in { \mathbb{Z}_q}^{*} at random and computes pk=h^s , chooses a hash function H_4: \left\{ 0, 1 \right\}^{*}\times \mathbb{Z}_q \to \mathbb{Z}_q , and models the three hash functions H_1, H_2,H_3 as random oracles. Let \psi: \mathbb{G}_2 \rightarrow \mathbb{G}_1 denote a one-way mapping, and let sig be a secure and efficient signature scheme. Both {\cal{B}} and {\cal{A}}_{{\rm{II}}} hold pp=(q, \mathbb{G}_1, \mathbb{G}_2,\mathbb{G}_T, g,h,e,pk,H_4).

    Oracle simulation (Phase 2): {{\cal{A}}}_{{\rm{II}}} issues oracles adaptively.

    • User-key-gen query. Suppose the i -th identity is marked as ID_i , and {\cal{B}} chooses ID_I from \left\{ID_1,\ldots,ID_{q_u}\right\} as the challenge identity. {\cal{B}} maintains an initially empty list L_u of tuples \left({ID}_{i},{usk}_{i},{upk}_{i}\right) . {\cal{B}} checks the list L_u when receiving a query from {\cal{A}}_{{\rm{II}}}. If {\cal{B}} finds ID_i in L_u , it responds with (usk_i,upk_i) . Otherwise, {\cal{B}} does as follows:

    1) If i \ne I , {\cal{B}} selects {x}_{i}\in {\mathbb{Z}_{q}}^{*} at random, sets {upk}_{i}={h}^{{x}_{i}} and {usk}_{i}={x}_{i} . In this case, {\cal{B}} returns \left({usk}_{i},{upk}_{i}\right) to {{\cal{A}}}_{{\rm{II}}} and inserts \left({ID}_{i},{usk}_{i},{upk}_{i}\right) into L_u .

    2) If i = I , {\cal{B}} arranges {upk}_{i}={h}^{x} , where the user’s private key is unknown to {\cal{B}} . In this case, {\cal{B}} adds \left({ID}_{i},\triangle,{upk}_{i}\right) into {L}_{u} and returns {upk}_{i} to {{\cal{A}}}_{{\rm{II}}}.

    • H_1 query. {\cal{B}} maintains an initially empty list L_{H1} of tuples \left({ID}_{i}, H_{1i}, {d}_{i}\right) . If {\cal{B}} finds ID_i in L_{H1} , it sends H_{1i} to {\cal{A}}_{{\rm{II}}}. Otherwise, {\cal{B}} selects d_i\in \mathbb{Z}_{q}^{*} randomly and does as follows:

    1) If i \ne I , {\cal{B}} computes {H}_{1}\left({ID}_{i}, {upk}_{i}\right)=g^{d_i} , sends {H}_{1i} to the adversary and inserts (ID_i,H_{1i},d_i) into L_{H1} .

    2) If i =I , {\cal{B}} computes {H}_{1}\left({ID}_{i}, {upk}_{i}\right)=g^{d_i/s} , sends {H}_{1i} to the adversary and inserts (ID_i,H_{1i},d_i) into L_{H1} .

    • Corruption query. When receiving a corruption query from {\cal{A}}_{{\rm{II}}}, {\cal{B}} terminates the simulation and outputs \perp if ID_i=ID_I or ID_i is not in L_u . Otherwise, {\cal{B}} finds x_i from L_u and returns it to {{\cal{A}}}_{{\rm{II}}}.

    • H_2 query. {\cal{B}} maintains an initially empty list L_{H2} of tuples ({ID}_{i},fname,t,{upk}_{i}, {H}_{2i},{c}) . If {\cal{B}} finds (ID_i,fname,t) in L_{H2} , it returns H_{2i} to the adversary {\cal{A}}_{{\rm{II}}}. Otherwise, {\cal{B}} does as follows:

    1) If i \ne I , {\cal{B}} selects {H}_{2i}\in \mathbb{Z}_q^{*} randomly, and returns {H}_{2i} to the adversary. Also, {\cal{B}} inserts ({ID}_{i},fname,t,{upk}_{i}, {H}_{2i},\bot) into {L}_{H2} .

    2) If i = I , {\cal{B}} flips a coin: {c}=1 represents heads, with probability \zeta , and {c}=0 represents tails, with probability 1-\zeta .

    a) If {c}=1 , {\cal{B}} selects {H}_{2i} \in \mathbb{Z}_{q}^{*} at random, and {H}_{2i} \notin \left\{{h}_{1}, \ldots, {h}_{k}\right\}. {\cal{B}} then sends {H}_{2}({ID}_{i},fname,t,{upk}_{i})= {H}_{2i} to {{\cal{A}}}_{{\rm{II}}}, and adds \left({ID}_{i},fname,t,{upk}_{i}, {H}_{2i},{c} \right) into {L}_{H2} list.

    b) If {c}=0 , {\cal{B}} chooses \hat{h}\in\left\{{h}_{1},\ldots,{h}_{k}\right\} that has never been selected, and sets {H}_{2}({ID}_{i},fname,t,{upk}_{i})=\hat{h} . {\cal{B}} then returns \hat{h} to {{\cal{A}}}_{{\rm{II}}}, and inserts ({ID}_{i},fname,t, {upk}_{i}, \hat{h},{c} ) into {L}_{H2} .

    • H_3 query. {\cal{B}} maintains an initially empty list L_{H3} of tuples \left({ID}_{i},\, fname,\, {\beta}_{1},\ldots,\,{\beta}_{m}\right) . If {\cal{B}} finds (ID_i,fname) in L_{H3} , it sends {\beta}_{1},\ldots,{\beta}_{m} to the adversary. Otherwise, {\cal{B}} does as follows:

    1) If i \ne I , {\cal{B}} selects {\beta}_{1},\ldots,{\beta}_{m}\in \mathbb{G}_{1} randomly and sets {H}_{3}\left({ID}_{i},j,fname\right)={\beta}_{j},1\leq j \leq m . {\cal{B}} returns {\beta}_{1},\ldots,{\beta}_{m} to {{\cal{A}}}_{{\rm{II}}} and adds \left({ID}_{i}, fname, {\beta}_{1},\ldots,{\beta}_{m}\right) to the list {L}_{H3} .

    2) If i =I , {\cal{B}} first takes out H_{2i} from L_{H_2} list and randomly selects r_{j} \in \mathbb{Z}_q^{*},1 \leq j \leq m. {\cal{B}} then computes \beta_{j} = \psi(h^x)^{r_{j}}\cdot (\psi(h)^{H_{2i}})^{r_{j}} and sets H_3(ID_i,j,fname)= \beta_{j}. {\cal{B}} returns {\beta}_{1},\ldots,{\beta}_{m} to {{\cal{A}}}_{{\rm{II}}} and stores \left({ID}_{i}, fname, {\beta}_{1},\ldots,{\beta}_{m}\right) to the list {L}_{H3} .

    • TagGen query. {{\cal{A}}}_{{\rm{II}}} submits a user identity {ID}_{i} , the filename fname , the file’s timestamp t , a data block \boldsymbol{m}_j \in F , j\in [1,m] , and the user’s private key {usk}_{i} .

    1) If i\ne I , {\cal{B}} first seeks the private key usk_i from L_u and H_{2i} from L_{H2} .

    a) {\cal{B}} computes sig_{usk_i}(fname,t,m) and sets the file label \tau=fname||t||m||sig_{usk_i}(fname,t,m) .

    b) {\cal{B}} computes the tag of the data block {\boldsymbol{m}}_{j},1\leq j\leq m by the following equation

    \begin{array}{l} {\sigma}_{j} ={{\left(\beta_{j} \cdot {\psi(h^s)}^{d_i\sum_{k=1}^{n}{H_4(ID, k) \cdot m_{jk}}}\right)^{\frac{1}{x_i+H_{2i}}}}} \end{array}

    c) {\cal{B}} responds label \tau and data block’s tag \sigma_j to {{\cal{A}}}_{{\rm{II}}}.

    2) If i=I , {\cal{B}} first looks up L_{H2} .

    a) If {c}=1 , {\cal{B}} aborts the game.

    b) If {c}=0 , {\cal{B}} computes sig_{usk_i}(fname,t,m) and sets the file label \tau=fname||t||m||sig_{usk_i}(fname,t,m) .

    c) {\cal{B}} computes the tag of the data block {\boldsymbol{m}}_{j},1\leq j\leq m by the following equation

    \begin{array}{l} {\sigma}_{j} =\psi(h)^{r_{j}} \cdot (g^{\frac{1}{x+H_{2i}}})^{d_i\sum_{k=1}^{n}{H_4(ID,k) \cdot m_{jk}}} \end{array}

    d) {\cal{B}} responds label \tau and data block’s tag \sigma_j to {{\cal{A}}}_{{\rm{II}}}.

    Output (Phase 3): {\cal{A}}_{{\rm{II}}} outputs \left({ID}^{*},{upk}^{*}, \tau^*, \boldsymbol{m}^*,\sigma^* \right) . {{\cal{A}}}_{{\rm{II}}} wins the game if the following conditions hold:

    a) A corruption query on {ID}^{*} has never been launched.

    b) The tag of \left({ID}^{*},fname^*,\tau^*,\boldsymbol{m}^{*}\right) has never been queried.

    c) ProofCheck \left( ID^{*}, upk^{*},\tau^*, \perp, \{{\boldsymbol{m}^{*}, \sigma^{*}}\}\right) =1. That is,

    \begin{split} \sigma^{*} &=({H}_{3}(ID^{*},upk^{*})\cdot Cert_{ID^*}^{\sum_{k=1}^{n}{H_4(ID,k)\cdot m_{k}}})^{\frac{1}{{usk}^{*}+h^{*}}}\\& =(\psi(h)^{r^*(usk^{*}+h^*)} \cdot g^{d\sum_{k=1}^{n}{H_4(ID,k)\cdot m_{k}}})^{\frac{1}{{usk}^{*}+h^{*}}}\\& =\psi(h)^{r^*}\cdot (g^{\frac{1}{usk^{*}+h^{*}}})^{d\sum_{k=1}^{n}{H_4(ID,k)\cdot m_{k}}}\\& \Rightarrow \left(\frac{\sigma^{*}}{\psi(h)^{r^*}}\right)^{(d\sum_{k=1}^{n}{H_4(ID,k)\cdot m_{k}})^{-1}}\\& =g^{(x+h^*)^{-1}} \qquad (ID^*=ID_I)\\[-10pt] \end{split}

    where {h}^{*}={{H}_{2}}^{*}\left({ID}^{*}, {fname}^{*}, t^{*},{upk}^{*}\right)=\hat{h} and {H}_{3}(ID^{*},upk^{*})= \psi(h)^{r^*(x+h^*)}.

    If {ID}^{*}\ne {ID}_{I} or {ID}^{*}={ID}_{I} but {c}^{*}=0 , {\cal{B}} aborts the game. Otherwise, if {c}^{*}=1 and {h}^{*} \notin \left\{{h}_{1},\ldots,{h}_{k}\right\} , {\cal{B}} looks up {L}_{H2} and computes g^{\frac{1}{x+{h}^{*}}}=(\frac{\sigma^{*}}{\psi(h)^{r^*}})^{(d\sum_{k=1}^{n}{H_4(ID,k)\cdot m_{k}})^{-1}} as the solution of the k -CAA problem.

    Probability analysis: Suppose {\cal{B}} can get the solution of the k -CAA problem, the following conditions must be held:

    a) E_1 : The simulation is never aborted in the Corruption query and TagGen query phases. The probability is {\rm{Pr}}[E_1]=(1-\frac{1}{q_u})^{q_r}(1-\frac{1}{q_u}\zeta)^{q_t}\geq (1- \frac{1}{q_u})^{q_r}(1-\zeta)^{q_t}.

    b) E_2 : {\cal{A}}_{{\rm{II}}} forges a valid signature. The probability is {\rm{Pr}}[E_2|E_1]=\epsilon .

    c) E_3 : ID^{*}=ID_I and c^{*}=1 . The probability is {\rm{Pr}}[E_3|E_1 \wedge E_2]=\frac{\zeta}{q_u} .

    In summary, the probability that {\cal{B}} succeeds is \epsilon' = {\rm{Pr}}[E_1 \wedge E_2 \wedge E_3] = {\rm{Pr}}[E_1]\cdot {\rm{Pr}}[E_2|E_1] \cdot {\rm{Pr}}[E_3|E_1\wedge E_2] \geq (1 - \frac{1}{q_u})^{q_r}(1 - \zeta)^{q_t}\frac{\zeta}{q_u}\epsilon. Setting \zeta=\frac{1}{q_t+1} maximizes (1-\zeta)^{q_t}\zeta , so we have

    \epsilon'\geq\left(1-\frac{1}{q_u}\right)^{q_r}\left(1-\frac{1}{q_t+1}\right)^{q_t}\frac{1}{(q_t+1)q_u}\epsilon

    Proof Suppose the adversary {\cal{A}}_{{\rm{III}}} can successfully forge a valid proof.

    Simulation: The system initialization and the oracle simulation are the same as in Game 1 or Game 2.

    ProofCheck. {\cal{A}}_{{\rm{III}}} generates the proof PF using some data blocks’ tags, and sends PF and the challenge to {\cal{B}} . {\cal{B}} checks PF and sends the result to {\cal{A}}_{{\rm{III}}}.

    Challenge. {\cal{B}} chooses chal = \{(i,c_i) : i \in I,I \subseteq \left[1,m\right], c_i\in \mathbb{Z}_{q}\} as a challenge, where at least one challenged data block has never had its tag queried. {\cal{B}} then sends the challenge to {\cal{A}}_{{\rm{III}}} .

    Forge: The adversary {\cal{A}}_{{\rm{III}}} outputs a valid proof \overline{PF} = \left\{\boldsymbol{m} = {\sum\limits_{ i\in I}}{{c}_{i}{\boldsymbol{m}}_{i}}, \bar{\sigma}=\prod\limits_{i \in I}{\bar{{\sigma }_{i}}^{{{c}_{i}}}}\right\} and sends it to {\cal{B}} .

    Probability analysis: Since the forged proof is valid, it satisfies

    \begin{split}& e(\bar{\sigma}, upk^*\cdot {h}^{h^*})/e\left(\displaystyle\prod_{i\in I}{\beta_i}^{c_i},h\right) \\& =e(H_1(ID^*,upk^*)^{\sum_{j=1}^{n}{H_4(ID^*,j)\cdot m_{j}}},pk) \end{split}

    Assume the real proof for the challenge chal is PF = \left\{\boldsymbol{m}, {\sigma}\right\} ; it also satisfies the verification equation:

    \begin{split}& e(\sigma, upk^*\cdot h^{h^*})/e\left(\displaystyle\prod_{i\in I}{\beta_i}^{c_i},h\right) \\& =e(H_1(ID^*,upk^*)^{\sum_{j=1}^{n}{H_4(ID^*,j)\cdot m_{j}}},pk) \end{split}

    Since the hash functions are collision resistant, the adversary {\cal{A}}_{{\rm{III}}} obtains a unique response when it issues an H_1 query on the same input, and similarly for H_2 and H_3 queries. Hence the two equations above imply \sigma=\bar{\sigma} , that is, \prod\limits_{i \in I}{{\sigma }_{i}^{{{c}_{i}}}}=\prod\limits_{i \in I}{\bar{{\sigma }_{i}}^{{{c}_{i}}}} . Because \sigma_i, \bar{\sigma_i} \in \mathbb{G}_1 , there exist x_i, y_i \in \mathbb{Z}_q^* satisfying \sigma_i=g^{x_i} and \bar{\sigma_i}=g^{y_i} . We get g^{\sum_{i \in I}c_ix_i}=g^{\sum_{i \in I}c_iy_i} , i.e., \sum_{i \in I}c_ix_i=\sum_{i \in I}c_iy_i , which means \sum_{i \in I}c_i(x_i-y_i)=0 . Since c_i \in \mathbb{Z}_q^* , we get x_i=y_i \, {\rm{mod}}\, q , i.e., the forged tags equal the real ones. This contradicts the assumption that at least one challenged block’s tag was never queried, because by Theorem 1 and Theorem 2 the probability of forging a single tag is negligible. Thus, the probability of successfully forging a proof is negligible if a file has been deleted or damaged.

    Theorem 3 is proved.

    [1] B. Latré, B. Braem, I. Moerman, et al., “A survey on wireless body area networks,” Wireless Networks, vol.17, no.1, pp.1–18, 2011.
    [2] J. F. Wan, C. F. Zou, S. Ullah, et al., “Cloud-enabled wireless body area networks for pervasive healthcare,” IEEE Network, vol.27, no.5, pp.56–61, 2013. DOI: 10.1109/MNET.2013.6616116
    [3] S. Ullah, A. V. Vasilakos, H. Chao, et al., “Cloud-assisted wireless body area networks,” Information Sciences, vol.284, pp.81–83, 2014.
    [4] Y. Deswarte, J. J. Quisquater, and A. Saïdane, “Remote integrity checking,” in Proceedings of the Sixth Working Conference on Integrity and Internal Control in Information Systems, Lausanne, Switzerland, pp.1–11, 2004.
    [5] G. Ateniese, R. Burns, R. Curtmola, et al., “Provable data possession at untrusted stores,” in Proceedings of ACM Conference on Computer and Communications Security, Alexandria, Virginia, USA, pp.598–609, 2007.
    [6] Y. M. Li and F. T. Zhang, “An efficient certificate-based data integrity auditing protocol for cloud-assisted WBANs,” IEEE Internet of Things Journal, vol.9, no.13, pp.11513–11523, 2022. DOI: 10.1109/JIOT.2021.3130291
    [7] C. Wang, S. S. M. Chow, Q. Wang, et al., “Privacy-preserving public auditing for secure cloud storage,” IEEE Transactions on Computers, vol.62, no.2, pp.362–375, 2013. DOI: 10.1109/TC.2011.245
    [8] B. Wang, B. Li, H. Li, et al., “Certificateless public auditing for data integrity in the cloud,” in Proceedings of IEEE Conference on Communications and Network Security, National Harbor, MD, USA, pp.136–144, 2013.
    [9] F. Armknecht, J. M. Bohli, G. O. Karame, et al., “Outsourced proofs of retrievability,” in Proceedings of ACM SIGSAC Conference on Computer and Communications Security, Scottsdale, Arizona, USA, pp.831–843, 2014.
    [10] S. K. Nayak and S. Tripathy, “SEPDP: Secure and efficient privacy preserving provable data possession in cloud storage,” IEEE Transactions on Services Computing, vol.14, no.3, pp.876–888, 2021. DOI: 10.1109/TSC.2018.2820713
    [11] Y. N. Li, Y. Yu, G. Min, et al., “Fuzzy identity-based data integrity auditing for reliable cloud storage systems,” IEEE Transactions on Dependable and Secure Computing, vol.16, no.1, pp.72–83, 2019. DOI: 10.1109/TDSC.2017.2662216
    [12] Z. Yang, W. Y. Wang, Y. Huang, et al., “Privacy-preserving public auditing scheme for data confidentiality and accountability in cloud storage,” Chinese Journal of Electronics, vol.28, no.1, pp.179–187, 2019. DOI: 10.1049/cje.2018.02.017
    [13] M. Armbrust, A. Fox, R. Griffith, et al., “A view of cloud computing,” Communications of the ACM, vol.53, no.4, pp.50–58, 2010. DOI: 10.1145/1721654.1721672
    [14] T. Wu, G. M. Yang, Y. Mu, et al., “Privacy-preserving proof of storage for the pay-as-you-go business model,” IEEE Transactions on Dependable and Secure Computing, vol.18, no.2, pp.563–575, 2021. DOI: 10.1109/TDSC.2019.2931193
    [15] T. G. Zimmerman, “Personal area networks: Near-field intrabody communication,” IBM Systems Journal, vol.35, no.3.4, pp.609–617, 1996. DOI: 10.1147/sj.353.0609
    [16] G. Ateniese, S. Kamara, and J. Katz, “Proofs of storage from homomorphic identification protocols,” in Proceedings of International Conference on the Theory and Application of Cryptology and Information Security, Tokyo, Japan, pp.319–333, 2009.
    [17] A. Juels and B. S. Kaliski, “PORs: Proofs of retrievability for large files,” in Proceedings of ACM Conference on Computer and Communications Security, Alexandria, Virginia, USA, pp.584–597, 2007.
    [18] H. Shacham and B. Waters, “Compact proofs of retrievability,” in Proceedings of International Conference on the Theory and Application of Cryptology and Information Security, Melbourne, Australia, pp.90–107, 2008.
    [19] D. Boneh, B. Lynn, and H. Shacham, “Short signatures from the Weil pairing,” in Proceedings of International Conference on the Theory and Application of Cryptology and Information Security, Gold Coast, Australia, pp.514–532, 2001.
    [20] H. Q. Wang, Q. H. Wu, B. Qin, et al., “Identity-based remote data possession checking in public clouds,” IET Information Security, vol.8, no.2, pp.114–121, 2014.
    [21] Y. Yu, M. H. Au, G. Ateniese, et al., “Identity-based remote data integrity checking with perfect data privacy preserving for cloud storage,” IEEE Transactions on Information Forensics and Security, vol.12, no.4, pp.767–778, 2017. DOI: 10.1109/TIFS.2016.2615853
    [22] J. G. Li, H. Yan, and Y. C. Zhang, “Certificateless public integrity checking of group shared data on cloud storage,” IEEE Transactions on Services Computing, vol.14, no.1, pp.71–81, 2021.
    [23] D. B. He, N. Kumar, S. Zeadally, et al., “Certificateless provable data possession scheme for cloud-based smart grid data management systems,” IEEE Transactions on Industrial Informatics, vol.14, no.3, pp.1232–1241, 2018. DOI: 10.1109/TII.2017.2761806
    [24] Y. N. Qi, X. Tang, and Y. F. Huang, “Enabling efficient batch updating verification for multi-versioned data in cloud storage,” Chinese Journal of Electronics, vol.28, no.2, pp.377–385, 2019. DOI: 10.1049/cje.2018.02.007
    [25] G. Prakash, M. Prateek, and I. Singh, “Secure public auditing using batch processing for cloud data storage,” in Proceedings of International Conference on Smart System, Innovations and Computing, Jaipur, India, pp.137–148, 2018.
    [26] D. B. He, S. Zeadally, and L. B. Wu, “Certificateless public auditing scheme for cloud-assisted wireless body area networks,” IEEE Systems Journal, vol.12, no.1, pp.64–73, 2018. DOI: 10.1109/JSYST.2015.2428620
    [27] C. M. Tang and X. J. Zhang, “A new publicly verifiable data possession on remote storage,” The Journal of Supercomputing, vol.75, no.1, pp.77–91, 2019. DOI: 10.1007/s11227-015-1556-z
    [28] X. J. Zhang, J. Zhao, C. X. Xu, et al., “CIPPPA: Conditional identity privacy-preserving public auditing for cloud-based WBANs against malicious auditors,” IEEE Transactions on Cloud Computing, vol.9, no.4, pp.1362–1375, 2021. DOI: 10.1109/TCC.2019.2927219
    [29] Y. Zhang, J. Yu, R. Hao, et al., “Enabling efficient user revocation in identity-based cloud storage auditing for shared big data,” IEEE Transactions on Dependable and Secure Computing, vol.17, no.3, pp.608–619, 2020.
    [30] A. Rehman, L. Jian, M. Q. Yasin, et al., “Securing cloud storage by remote data integrity check with secured key generation,” Chinese Journal of Electronics, vol.30, no.3, pp.489–499, 2021. DOI: 10.1049/cje.2021.04.002
    [31] L. X. Huang, J. L. Zhou, G. X. Zhang, et al., “Certificateless public verification for data storage and sharing in the cloud,” Chinese Journal of Electronics, vol.29, no.4, pp.639–647, 2020. DOI: 10.1049/cje.2020.05.007
    [32]
    Y. J. Wang, Q. H. Wu, B. Qin, et al., “Identity-based data outsourcing with comprehensive auditing in clouds,” IEEE Trans. on Information Forensics and Security, vol.12, no.4, pp.940–952, 2017. DOI: 10.1109/TIFS.2016.2646913
    [33]
    H. Yan, J. G. Li, J. G. Han, et al., “A novel efficient remote data possession checking protocol in cloud storage,” IEEE Transactions on Information Forensics and Security, vol.12, no.1, pp.78–88, 2017. DOI: 10.1109/TIFS.2016.2601070
    [34]
    S. Thokchom and D. K. Saikia, “Privacy preserving integrity checking of shared dynamic cloud data with user revocation,” Journal of Information Security and Applications, vol.50, article no.102427, 2020. DOI: 10.1016/j.jisa.2019.102427
    [35]
    A. De Caro and V. Iovino, “jPBC: Java pairing based cryptography,” in Proceedings of IEEE Symposium on Computers and Communications, Kerkyra, Corfu, Greece, pp.850–855, 2011.