Research Article

A collaborative auditing scheme with dynamic data updates based on blockchain

Article: 2213863 | Received 27 Nov 2022, Accepted 10 May 2023, Published online: 05 Jun 2023

Abstract

Cloud data auditing is essential to ensure the integrity of cloud data. The main idea of cloud auditing is to entrust the audit task to a third-party auditor (TPA) with powerful computing ability. However, the TPA may leak data and become a single point of failure. Recently, blockchain has been introduced to solve these problems caused by the TPA. However, the dynamic storage structures developed for traditional cloud storage do not apply to the blockchain. This paper proposes a blockchain-based collaborative public auditing scheme for dynamic data. We design the cloud service provider (CSP) to generate a challenge set using the latest block hash, so it does not need to interact with the blockchain in the challenge phase, dramatically reducing communication overhead. In addition, considering economic factors, we allow users to seek partners to share audit costs. The EigenTrust model evaluates the reputation of each user's audit behaviour, effectively reducing the probability of malicious users participating. For data updates, we introduce the Pseudo Index Linked List (PIL) index management structure, which reduces the size of the index management structure to suit the blockchain's characteristics and gives update operations constant time complexity. Through a complete security analysis and performance evaluation, we prove the security and effectiveness of the scheme.

1. Introduction

With the development and popularisation of intelligent devices, massive numbers of terminal devices connected to the network generate large-scale data. While people enjoy the convenience of big data, managing these data has become a problem. As the amount of data explodes, so does the storage and computation overhead, which puts much pressure on local users with limited resources (Armbrust et al., Citation2010). Given the convenient, low-cost data storage and computing services provided by cloud computing, more and more users are willing to store their data with outsourcing services to reduce local data-management costs.

Although data outsourcing services bring convenience to user data management, they raise many privacy-leakage problems (Velte et al., Citation2010). Compared with traditional network applications, storage in the cloud separates the ownership and management of user data: the cloud service platform can perform any operation on user data. These outsourced data contain private user information, such as e-commerce, medical, and insurance records. If external attackers compromise the cloud service platform, or the platform itself behaves maliciously, user data privacy is at severe risk (Kandukuri & Rakshit, Citation2009). Moreover, users cannot use traditional technologies to verify the integrity of outsourced data, because those technologies require downloading all the data for verification (Katz & Lindell, Citation2008; Liu et al., Citation2009). Therefore, the outsourcing service must provide an efficient audit mechanism to ensure the integrity and security of user data.

So far, many schemes have been proposed to overcome this problem. In 2007, Ateniese et al. (Citation2007) proposed the first provable data possession (PDP) model, allowing users to check the integrity of remote outsourced data without downloading all of it. The PDP model uses homomorphic authentication tags to aggregate multiple tag values, effectively reducing the protocol's communication overhead. At the same time, Juels and Kaliski (Citation2007) proposed a proof of retrievability (PoR) model. Unlike the PDP model, the PoR model guarantees data integrity verification and can restore corrupted data. PDP/PoR can also be extended with various features (Dodis et al., Citation2009; Sebé et al., Citation2008). Because the user and the CSP do not trust each other, the user needs to audit the data frequently, which adds considerable overhead. Wang et al. (Citation2009) first introduced a third-party auditor (TPA) to realise public auditing and used BLS signatures to construct homomorphic tags to solve the above problems. Introducing a TPA relieves users of unnecessary overhead, and this audit mode has been widely adopted. However, schemes that introduce a TPA have serious security issues: the TPA may pry into users' private data during integrity verification, risking data privacy leakage. Wang et al. (Citation2010a) therefore proposed a data audit scheme that combines homomorphic authenticators with random masks. The random mask blinds the essential data information in the integrity proof, making it impossible for the TPA to obtain relevant information about user data. Begam et al. (Citation2012) also proposed a collaborative PDP scheme that utilises homomorphic verifiable responses and an index hierarchy to achieve batch auditing in multi-cloud environments. Afterwards, many scholars proposed various data integrity audit schemes with a TPA (Shen et al., Citation2017; Wu et al., Citation2019; Yan & Gui, Citation2021; Yang et al., Citation2020). However, these TPA-based solutions are always subject to the problems of centralised audits, such as single points of failure and TPA performance constraints. More importantly, a centralised TPA cannot meet the requirements of decentralised scenarios.

Moreover, putting critical audit tasks on the TPA violates the principle that power should not be concentrated in one party. Blockchain is tamper-proof, unerasable, and transparent, which naturally meets the requirements of data integrity verification and offers a new way to solve it. Most decentralised audit schemes rely on blockchain networks to act as impartial public auditors (Francati et al., Citation2021; Kopp et al., Citation2016; Labs, Citation2017). Blockchain-based audit schemes can better address security risks from CSPs and TPAs. Nevertheless, further research still faces many challenges. On the one hand, simply applying blockchain technology to an audit scheme does not reduce the interaction between the blockchain and the CSP during the audit process. Wang et al. (Citation2020) proposed the concept of non-interactive public provable data possession (NI-PPDP): the CSP generates challenge subsets and proofs with the latest block hash values, and the deployed smart contracts audit them periodically. Therefore, the CSP does not need to interact with the blockchain multiple times during the audit process, significantly reducing communication overhead. On the other hand, the wide application of blockchain makes its use increasingly expensive: the market price of Ether is more than 1000 dollars, and that of Bitcoin more than 10000 dollars, which is intolerable for the average user. Therefore, Li et al. (Citation2022) proposed a blockchain-based cross-user data sharing audit scheme. It allows users to find partners by broadcasting audit requirements, cooperate through Diffie-Hellman key exchange, and use shared keys to complete audits more efficiently.

For dynamic data auditing, the availability of the outsourcing service platform should be reflected in allowing users to update their data efficiently. Academia has proposed a series of dynamic audit schemes for traditional cloud storage (Ateniese et al., Citation2008; Erway et al., Citation2015; Wang et al., Citation2010b; Zhu et al., Citation2011), but most are not suitable for on-chain auditing. The storage nature of blockchain limits the size of on-chain data: each block can only hold a limited amount of data, and as the blockchain grows, the space each block can spare only shrinks. The memory required for the dynamic proofs in common dynamic audit schemes is therefore unacceptable for on-chain audits. Recently, research on on-chain dynamic auditing has been advancing. Campanelli et al. (Citation2020) proposed incrementally aggregatable vector commitments, which benefit large-scale, highly decentralised networks (such as the dynamic state of a blockchain) by guaranteeing small proof sizes and low computational requirements. Duan et al. (Citation2022) also proposed an efficient index information management structure, dramatically reducing the index state while preserving compact proofs, making it more suitable for on-chain auditing.

This paper proposes a dynamic, collaborative audit scheme based on blockchain to solve the above problems. We design the CSP to generate a subset of challenges based on factors beyond its control and send the evidence to the smart contract, which reduces the communication cost in the audit phase. Users broadcast audit information on the blockchain to find a partner and achieve collaborative data auditing. We introduce the EigenTrust model (Kamvar et al., Citation2003) to evaluate users; it uses each audit result to assess a user's trust value. Once two parties reach a cooperation, we use a key exchange protocol to generate a shared public and private key between the trusted users. In addition, we introduce the PIL index management structure (Duan et al., Citation2022) to realise fully dynamic operations on the data. The main contributions of this paper are summarised as follows:

  • We propose a collaborative audit scheme based on blockchain that supports dynamic data. The scheme allows users to broadcast to find partners and share the audit cost. The CSP does not need interaction during ‘challenge-response’ and generates the challenge subset independently. Meanwhile, the scheme uses smart contracts to set up an effective reward and punishment mechanism that curbs malicious behaviour.

  • We use the EigenTrust reputation model to evaluate users' reputations. Before each audit cooperation is reached, the system evaluates the reputation value of the participating users, which keeps users with low reputation values out and builds a highly trusted user group.

  • We use the PIL index management structure to reduce the index state size and better adapt to the limited space on the chain. An update operation only requires constant time complexity, which also improves the efficiency of dynamic operations.

This paper is an expanded version of our previous paper published at CSS 2022 (Xiao et al., Citation2022). The differences from the conference version are as follows. First, we reorganise the introduction and related work in Section 1. Second, we introduce the PIL structure for dynamic updates: we explain the PIL in Section 2 and expand the data update phase in Section 4. Finally, we compare the computation and communication costs of our scheme with other schemes in Section 6.1, and we add comparative experiments for dynamic structures in Section 6.2.

1.1. Related work

With all kinds of data leakage incidents emerging, it is necessary to verify effectively whether cloud data are complete. One of the earliest works on data integrity verification is the Provable Data Possession (PDP) model proposed by Ateniese et al. (Citation2007). The scheme uses a probability model to determine the minimum number of sampled blocks required to detect damaged blocks, so users can find damaged file blocks without verifying all data blocks. It also uses homomorphic authentication tags based on RSA signatures, whose aggregation property yields a compact proof of possession and significantly reduces communication overhead. Juels and Kaliski (Citation2007) proposed the proof of retrievability (PoR) model in the same year. Their scheme generates several redundant data blocks, called ‘sentinels’, which are inserted into encrypted files. The file owner can determine whether the file is damaged by challenging the locations of these sentinel blocks. PDP and PoR differ in that the latter can recover the detected damaged data blocks.

Since the PDP and PoR schemes were put forward, through the joint efforts of scholars worldwide, data integrity verification schemes with various characteristics have been proposed to meet the needs of different users and scenarios. The features of current data integrity verification schemes can be summarised into several categories: dynamic data updates, multiple copies, privacy protection, blockchain applications, etc. Data integrity verification for static data mainly includes the RSA-based PDP scheme proposed by Ateniese et al. and the BLS-based PDP scheme proposed by Wang et al. (Citation2011). This type of scheme provides a basic framework for data integrity verification; the BLS signatures, aggregated data tags, and probability-based sampling strategies involved have profoundly influenced subsequent schemes.

Data integrity verification schemes supporting dynamic updates include the rank-table-based scheme proposed by Erway et al. (Citation2015), the Merkle Hash Tree (MHT) based scheme proposed by Wang et al. (Citation2010b), and several schemes that improve on these data structures. Liu et al. (Citation2014) combined the idea of computing a data block index from hierarchical information, designed in the Erway scheme, with the Merkle tree data structure and proposed a dynamic update scheme based on the MR-MHT authentication structure. This scheme resists the substitution attack caused by MHT leaf nodes not storing the relevant data block index. Zhu et al. (Citation2012) introduced an index hash table structure to record the changes in each data block. However, when inserting or deleting, that scheme must modify all data tags after the operation position, which is inefficient. To address this, Tian et al. (Citation2015) proposed a new tag structure based on a dynamic hash table, in which pointers connect the file elements and block elements. An insertion or deletion only needs to change the pointers, effectively reducing communication costs and improving efficiency. Most data integrity verification schemes introduce dynamic structures to support dynamic operations, but dynamic structures require a certain amount of storage space.

To prevent attacks or failures at the CSP from causing permanent data loss, Curtmola et al. (Citation2008) proposed the first multi-copy verification mechanism, which uses a single tag to verify any copy. Liu et al. (Citation2014) proposed a multi-replica dynamic public audit scheme, combining the hash values of the file blocks in all replicas into one Merkle hash tree to reduce the update cost. However, the storage space of the Merkle hash tree grows with the number of replicas, burdening the CSP. Yaling and Li (Citation2020) introduced a multi-branch tree to achieve dynamic updates, simplifying the verification structure. In these schemes, the generation of data copies and tags is done by the users, which increases the users' computing load.

Wang et al. (Citation2011) used random mask technology to prevent third-party auditors from disclosing users' privacy; the scheme prevents data privacy disclosure during proof calculation. Wang et al. (Citation2014) used ring signatures to construct a homomorphic authenticator, named ‘Oruta’, so that neither the TPA nor the CSP can learn the data. Kumar (Citation2020) proposed a system to enhance data privacy protection: before data is stored in the cloud, it is encrypted with the RSA and AES algorithms; the user sends a hash-based message authentication code (HMAC) to the TPA, which performs the cloning procedure followed by the CSP and audits data with the SHA-512 algorithm. Susilo et al. (Citation2022) proposed a cloud data audit scheme in which tags are generated not per block but as bunch tags, enabling integrity proofs with fewer bits while maintaining privacy.

Although some of the above works take measures to protect data security, they cannot wholly solve the TPA's single point of failure, performance defects, and security threats. The decentralised architecture of blockchain technology can free data auditing from dependence on trusted third parties, and introducing blockchain into cloud data auditing can effectively verify data integrity. Zikratov et al. (Citation2017) proposed a private-chain model called Zeppar, which uploads file hash values to the blockchain and judges whether the data are complete by comparing hash values. Tian et al. (Citation2021) proposed a blockchain-based dual-server shared audit scheme to prevent data users from losing data under single points of failure and repeated forgery attacks. Zhang et al. (Citation2021) proposed a blockchain-based multi-cloud storage data audit scheme to protect data integrity and accurately arbitrate service disputes. Xie et al. (Citation2022) proposed a blockchain-based data outsourcing storage protocol that enables users to obtain fine-grained compensation based on data corruption. Yuan et al. (Citation2020) proposed a blockchain-based public audit and secure data deduplication scheme with fair arbitration, using smart contracts to punish malicious CSPs automatically. Zhang et al. (Citation2019) proposed a certificateless public verification scheme based on blockchain technology against procrastinating auditors. Liang et al. (Citation2022) proposed a data privacy protection scheme based on a consortium blockchain to achieve access control with ciphertext-policy attribute-based encryption; the blockchain interacts with a distributed dedicated cluster on the chain and uses smart contracts to complete data access audits. Liang et al. (Citation2020) also applied blockchain to circuit copyright protection. The architecture proposed by Li et al. (Citation2020) eliminates the third-party auditor and involves only mutually distrusting data owners; each data owner stores lightweight tags on the blockchain and proves the integrity of cloud data by building an MHT. To make the ‘challenge-response’ mechanism in on-chain auditing more efficient, Wang et al. (Citation2020) proposed non-interactive public provable data possession (NI-PPDP), which uses smart contracts to let the CSP generate audit challenges and proofs automatically and then send them to the blockchain. NI-PPDP reduces the interaction in the verification process, reduces users' work, and makes the system more efficient. To ensure the continuous availability and privacy protection of outsourced data, Huang et al. (Citation2022) proposed a non-interactive zero-knowledge audit scheme based on blockchain. The scheme uses the Fiat-Shamir heuristic to support sequentially generated challenges in the protocol, which realises non-interactive blockchain-based audits and ensures the continuity and integrity of outsourced data. From an economic perspective, some scholars (Li et al., Citation2022) also proposed a blockchain-based cross-user data sharing audit scheme, allowing users to find partners to share the audit cost and improve overall system efficiency. However, that scheme does not consider the credibility of partners: audit quality degrades if malicious users join the audit process.

The rest of this paper is organised as follows. Section 2 introduces the relevant preliminary knowledge. Section 3 describes the system model, adversary model, and design goals. Section 4 presents the blockchain-based collaborative auditing scheme for dynamic data. Section 5 explains the scheme's correctness and analyses its security under the adversary model. Section 6 provides a performance evaluation, and Section 7 concludes.

2. Preliminaries

2.1. EigenTrust model

Trust models are mainly used for peer trust evaluation in P2P networks. EigenTrust is a global trust model proposed by Kamvar et al. It calculates the global trust of each node by iterating on the mutual evaluations between neighbouring nodes in the network. In the system, a node evaluates its counterparty's reputation according to the number of satisfactory or unsatisfactory transactions, providing a security mechanism that restrains dishonest service by nodes.

2.1.1. Local trust values

In P2P networks, nodes interact with each other through various service requests. The node requesting a service is the requester, and the node responding is the responder. In this article, the requester is the data owner who initiates the cooperation broadcast, while the responder is the data owner who responds. Assume the number of nodes in the network is n. When requester node i and responder node j complete a transaction, node i evaluates its satisfaction with the service provided by node j: a satisfactory transaction is recorded as sat(i,j) = 1 and an unsatisfactory one as unsat(i,j) = 1. Based on the transaction evaluations between node i and node j, the local reputation evaluation S(i,j) of node i about node j is

S(i,j) = sat(i,j) − unsat(i,j).

The original local trust values are then normalised to keep them between 0 and 1, so that dishonest nodes cannot conspire with partners by exaggerating trust values. The normalisation is

C(i,j) = max(S(i,j), 0) / Σ_j max(S(i,j), 0), if Σ_j max(S(i,j), 0) ≠ 0; otherwise C(i,j) = p_j.

Here P is the set of pre-trusted nodes, and p_j = 1/|P| if j ∈ P, otherwise p_j = 0. Finally, each node holds a normalised local reputation value C(i,j) for every other node, yielding the normalised local reputation matrix C = [C(i,j)].

2.1.2. Recommended trust value

The local reputation value only represents the reputation generated by direct transactions between nodes and cannot reflect a node's global reputation. When there is no direct transaction between node i and node k, node i can estimate the trust value of node k by asking the nodes that have transacted directly with node k for their local reputation values:

t(i,k) = Σ_j C(i,j) · C(j,k).

In matrix form, this inquiry is t_i = C^T c_i, where c_i is the vector with entries C(i,k) and t_i is the vector with entries t(i,k). However, t_i only reflects the opinions of node i's immediate neighbours and cannot represent the global reputation. To obtain a broader view, node i also asks its neighbours' neighbours, giving t_i = (C^T)^2 c_i; similarly, t_i = (C^T)^n c_i after n iterations. Finally, t_i converges to the principal eigenvector t of C^T, which represents the global reputation value of each node in the network.
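To make the iteration concrete, the following minimal sketch (our illustration, not the authors' code) computes global trust values from a matrix of sat/unsat counts. It includes the damping toward pre-trusted nodes used in the original EigenTrust algorithm (Kamvar et al., Citation2003); the damping factor alpha and the example matrix are assumptions for demonstration.

```python
import numpy as np

def eigentrust(S, pretrusted, alpha=0.1, eps=1e-9):
    """Global trust from local scores S[i, j] = sat(i, j) - unsat(i, j)."""
    n = S.shape[0]
    p = np.zeros(n)
    p[list(pretrusted)] = 1.0 / len(pretrusted)     # p_j = 1/|P| for j in P

    # Normalised local trust C(i, j); rows with no positive score fall back to p.
    C = np.maximum(S, 0).astype(float)
    for i in range(n):
        row = C[i].sum()
        C[i] = C[i] / row if row > 0 else p

    # Power iteration t <- (1 - alpha) * C^T t + alpha * p until convergence;
    # t approaches the principal eigenvector of C^T described above.
    t = p.copy()
    while True:
        t_next = (1 - alpha) * C.T @ t + alpha * p
        if np.abs(t_next - t).sum() < eps:
            return t_next
        t = t_next

# Three nodes, node 0 pre-trusted; entries are sat - unsat counts (toy data).
S = np.array([[0, 3, 1],
              [2, 0, 0],
              [1, 1, 0]])
print(eigentrust(S, pretrusted={0}))
```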

2.2. Pseudo index linked list(PIL)

In cloud auditing, the file is divided into data blocks and a tag is generated for each block (e.g. σ_i = (H(name||i) · u^{m_i})^x, where name is the file name, m_i is the i-th data block, and x is the user's private key). Each tag σ_i binds the index i to the data block m_i. The tags are signed with the user's private key x, so they cannot be tampered with. If a tag did not contain index information, any intact data block with its tag could pass verification; binding the index to the block prevents the CSP from cheating auditors with other blocks' information. However, in previous index management structures, inserting or deleting a data block requires recalculating the tags of all data blocks after the update position, making dynamic updates costly and inefficient.
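As a toy illustration of this index binding (ours, not from the paper), the snippet below computes tags over Z_p with the block index hashed into H(name||i); modular exponentiation stands in for the pairing-group operations, and all parameter values are made up:

```python
import hashlib

p = 2**127 - 1          # toy prime modulus standing in for the group G1

def H(data: str) -> int:
    """Hash-to-group stand-in for H: {0,1}* -> G1."""
    return int(hashlib.sha256(data.encode()).hexdigest(), 16) % p

def tag(name: str, i: int, m_i: int, u: int, x: int) -> int:
    """sigma_i = (H(name||i) * u^m_i)^x; the index i is baked into the tag."""
    return pow(H(f"{name}||{i}") * pow(u, m_i, p) % p, x, p)

# The same content filed under a different index yields a different tag, so a
# tag computed for block 2 cannot vouch for block 1:
assert tag("fileA", 1, 42, u=7, x=12345) != tag("fileA", 2, 42, u=7, x=12345)
```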

We introduce the PIL structure (Duan et al., Citation2022) to maintain the mapping between indexes and data blocks. The PIL is an on-chain dynamic index mapping structure that significantly reduces the index state and update cost. To avoid index updates cascading from block updates, the PIL binds each block to insensitive pseudo index information. Figure 1(a) shows the structure of a PIL. A PIL can be regarded as a simple linked list: each entry stores the pseudo index information of one data block and contains a pointer and a timestamp, where the pointer points to the pseudo index information of the next logical data block. The shaded entry is the header of the PIL, and the free pointer points to the empty entry. The advantage of this index structure is that insertion and deletion only require constant complexity, dramatically reducing cost compared with the O(N) complexity of other index management structures.

Figure 1. Structure of the PIL for handling data dynamics.


Operations on the PIL include insert, delete, and modify. As shown in Figure 1(b), when a new data block is inserted at logical position 3, its pseudo index information is written to the entry pointed to by the free pointer. The entry content is (1, T1), where 1 is the entry position holding the pseudo index information of the original logical position 3, and T1 is the current time. We also modify the entry of the logical position 2 data block to (4, T1), where 4 is the position of the entry holding the newly inserted block. As shown in Figure 1(c), when the data block at logical position 5 is deleted, the information in its PIL entry is cleared and the free pointer is set to that entry, i.e. the free pointer now points to the fifth entry. Then the pseudo index information of logical position 4 is modified to (6, T2), where 6 is the entry position holding the pseudo index information of logical position 6 and T2 is the current time. As shown in Figure 1(d), when the data block at logical position 1 is modified, the entry becomes (3, T3): the pointer part is unchanged and T3 is the current time. When modifying a data block, the PIL only needs to alter the timestamp of the corresponding entry.
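The sketch below (an illustrative Python rendering under our own assumptions, not the authors' implementation) mirrors these operations: each entry holds a (pointer, timestamp) pair, and an update touches at most two entries. Locating an entry is shown by walking the pointers for clarity; in the scheme the relevant entry positions are already known.

```python
import time

class PIL:
    """Toy Pseudo Index Linked List: entries[k] = (next_entry, timestamp)."""

    def __init__(self, n):
        t0 = time.time()
        # Blocks start in order; the last entry has no successor. The extra
        # None slot is the empty entry that the free pointer designates.
        self.entries = [(k + 1, t0) for k in range(n - 1)] + [(None, t0), None]
        self.head, self.free = 0, n

    def _entry(self, j):
        """Entry index holding logical block j (1-based), found by walking."""
        e = self.head
        for _ in range(j - 1):
            e = self.entries[e][0]
        return e

    def insert(self, j):
        """Insert a block at logical position j, cf. Figure 1(b)."""
        t, new = time.time(), self.free
        old = self._entry(j)                     # entry of the current j-th block
        prev = self._entry(j - 1) if j > 1 else None
        self.entries[new] = (old, t)             # new entry points at old j-th
        if prev is None:
            self.head = new
        else:
            self.entries[prev] = (new, t)        # predecessor points at new entry
        self.entries.append(None)                # sketch: provision next free slot
        self.free = len(self.entries) - 1

    def delete(self, j):
        """Delete logical block j, cf. Figure 1(c)."""
        t = time.time()
        e = self._entry(j)
        nxt = self.entries[e][0]
        if j == 1:
            self.head = nxt
        else:
            self.entries[self._entry(j - 1)] = (nxt, t)
        self.entries[e] = None                   # hand the slot back ...
        self.free = e                            # ... to the free pointer

    def modify(self, j):
        """Modify logical block j, cf. Figure 1(d): only the timestamp changes."""
        e = self._entry(j)
        self.entries[e] = (self.entries[e][0], time.time())

# The running example of Figure 1: six blocks, then insert, delete, modify.
pil = PIL(6)
pil.insert(3)
pil.delete(5)
pil.modify(1)
```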

3. Problem statement

3.1. System model

The blockchain-based dynamic collaborative audit scheme includes three entities: the Data Owner, the Cloud Service Provider, and the Blockchain. As shown in Figure 2, each entity is defined as follows:

  • Data Owner (DO): Having limited computing, communication, and storage resources, the DO stores its data with the CSP and deploys smart contracts on the blockchain to issue audit and dynamic operation requests.

  • Cloud Service Provider (CSP): It has abundant storage space and computing capacity and is responsible for storing the DO's data and responding to dynamic operation and audit requests.

  • Blockchain: It has powerful computing resources to help the DO verify the proof returned by the CSP. Depending on whether the CSP proves the integrity of the data, the reward and punishment mechanism is executed according to the smart contract.

Figure 2. The system model.


3.2. Adversary model

This scheme confirms the completeness and security of the DO's cloud data and guards against CSP dishonesty. In addition, it must satisfy the DO's requests to update cloud data dynamically. We assume the CSP may be dishonest, so the scheme mainly considers the following three attack types:

  1. Replacing attack: The CSP attempts to pass the audit by replacing the challenged data blocks and tags with combinations of other undamaged data blocks and tags.

  2. Forgery attack: CSP forges audit certificates to cheat smart contracts for its interests or to protect its reputation.

  3. Replay attack: The CSP answers the current challenge with previously verified proofs instead of querying the stored data.

3.3. Design goals

Under the above attacks, our proposed scheme meets the following security goals:

  • Soundness (data integrity): The scheme ensures that only the complete data of the DO stored by the CSP can pass the audit verification of the smart contract.

  • Privacy preserving: The scheme ensures data security during cloud storage, and malicious third parties cannot obtain data information.

  • Public auditability: Anyone (on the blockchain) can verify the correctness and integrity of DO data stored in the cloud.

  • Accountability: Accountability provides a mechanism for CSPs and DOs to be accountable for their actions. In this paper, DO entrusts data to CSP by paying storage fees. Similarly, CSP will bear specific responsibilities (pay fines) if it does not guarantee data integrity.

  • Collaborative auditing: The scheme uses the EigenTrust model to evaluate the reputation of users, establish high-quality user groups, reduce the audit cost of individual users and improve overall efficiency.

4. Scheme description

Before uploading file Fa, the DO Ua broadcasts an audit request to find a partner with whom to share the service fee; another DO Ub has a similar audit request for its file Fb. If Ub responds to the broadcast message, it signals willingness to cooperate with Ua on the audit. Ua first uses the EigenTrust reputation system to assess the trust value of Ub. If Ub's trust value exceeds a preset threshold, the follow-up audit is completed with Ub; otherwise, Ua continues broadcasting its audit requirement. After passing the EigenTrust check, Ua and Ub generate a pair of public and private keys PK, SK with the Diffie-Hellman key exchange protocol, used exclusively to audit Fa and Fb. Then Ua and Ub chunk their files, generate tags for the file chunks, and upload them to the CSP; the tags are also uploaded to the blockchain. The smart contract SCa is initiated, deposits are collected from Ua, Ub, and the CSP, and the final audit result determines the destination of the deposits. After the CSP accepts the files and tags, it initiates an audit contract on the blockchain that periodically generates challenges. The CSP generates a set of challenges using challenge seeds based on information from the latest block, and sends the proof, challenge set, and challenge seed information to the smart contract SCb. SCb runs the audit and returns the result to SCa, which pays the service fee or compensation. Our scheme's workflow is shown in Figure 3. For clarity, the notations used in this paper are listed in Table 1. The specific steps of the scheme are as follows:

Figure 3. The workflow of the scheme.


Table 1. Notations.

  1. Setup Algorithm: (a) The system chooses three multiplicative cyclic groups G1, G2, and GT of order p. Let g be a generator of G2 and e: G1 × G2 → GT be a bilinear map. (b) The system selects two hash functions h(·): {0,1}* → Zp and H(·): {0,1}* → G1, and a pseudorandom function ζ: {0,1}* → [1, 2n]. (c) The DO Ui selects a random sk_i ∈ Zp as its private key; the public key is pk_i = g^{sk_i}. The CSP selects a random ssk ∈ Zp as its private key; the public key is spk = g^{ssk}.

  2. Key-Sharing Algorithm: (a) Ua broadcasts on the blockchain to find partners for a joint audit to save costs. (b) When another DO Ub responds, Ua evaluates Ub's credibility with the EigenTrust model (see Section 2.1 for the specific steps). If the reputation value of Ub is lower than the preset threshold, Ua refuses to cooperate and continues broadcasting on the blockchain; otherwise, Ua and Ub reach a cooperation intention. (c) Both parties then use the Diffie-Hellman key exchange protocol to generate the public and private keys used in the subsequent audit: k = (g^{sk_a})^{sk_b} = (g^{sk_b})^{sk_a} = g^{sk_a·sk_b}. (d) The shared private key is SK = h(k), and the shared public key is PK = g^{SK}. Ua chooses u ∈ G1 and computes e(u, PK). The public parameters are Para = (PK, g, u, e(u, PK), h(·), H(·)). (A Python sketch of this exchange appears after this list.)

  3. Tag Generation Algorithm: (a) Ua first divides Fa into Fa = {m1, m2, m3, …, mn}. Then Ua calculates a tag σi for each data block: σi = (H(Wi) · u^{mi})^{SK}, i ∈ [1, n]. Let ψa = {σ1, σ2, …, σn} denote the set of authenticators. Here Wi = namea || A[pi], where namea ∈ Zp is chosen as the identifier of file Fa, A is the dynamic structure storing the index information of the data blocks, and A[pi] holds the pseudo index of the next logical block. In the beginning, the data blocks are stored in order, so the pseudo index sequence is 2, 3, 4, 5, …, i+1, and the tag is σi = (H(namea || (i+1, t0)) · u^{mi})^{SK}. After several update operations, the PIL entry becomes A[pi] = (p_{i+1}, t). Ua computes SIGa = namea || Sig_{ska}(namea) as the file tag for Fa, where Sig(·) is a signature algorithm. (b) Finally, Ua uploads {Fa, φa = {ψa, SIGa}, Aa} to the CSP. Meanwhile, Ua encrypts Aa with its private key and sends it to the blockchain. Ub does the same to upload its file Fb = {m1, m2, m3, …, mn}, tags φb = {ψb, SIGb}, and Ab. (c) Before storing the data, the CSP must confirm the correctness of the data and tags. If they are correct, the CSP saves the data; otherwise, it discards them and lets the DO resend the information. (d) Ua, Ub, and the CSP sign a smart contract SCa, and each of the three parties pays a certain amount to SCa as a deposit. As shown in Algorithm 1, the input of SCa includes {cycletime, price}, where cycletime is the audit cycle and price is the deposit amount. SCa operates on the deposits as soon as it receives the result of SCb: when SCb returns ‘success’, the CSP receives the deposits of Ua and Ub as the service charge; when SCb returns ‘fail’, the CSP's deposit is sent to Ua and Ub as compensation. (A sketch of this settlement logic appears after this list.)

  4. Challenge Algorithm: (a) Once the CSP stores the data, the audit smart contract SCb is triggered; from then on, audit tasks are executed automatically every audit cycle. (b) The scheme lets the CSP generate the challenge set itself. Because the inputs to challenge generation include factors the CSP cannot control, such as the latest block hash, the correctness of the challenge set is guaranteed. The CSP chooses an integer c ∈ [1, 2n] at random and computes s_i = ζ(τ || time || i), i ∈ [1, c], where τ is the latest block hash on the blockchain and time is the current timestamp. S = {s1, s2, s3, …, sc} is the current challenge set. (A sketch of challenge and proof generation appears after this list.)

  5. ProofGen Algorithm: (a) The CSP computes ν_i = h(τ || time || i), i ∈ [1, c], based on the challenge set S. Then the CSP computes σ = ∏_{i=1}^{c} σ_{s_i}^{ν_i} and μ′ = ∑_{i=1}^{c} ν_i · m_{s_i}. (b) The CSP chooses r ∈ Zp and calculates R = e(u, PK)^r ∈ GT; it then computes μ = r + γμ′, where γ = h(R) ∈ Zp. (c) The CSP sends proof = (μ, σ, R, τ, c) to the auditing smart contract SCb to attest the correctness of the data storage.

  6. Verify Algorithm: (a) Based on the challenge information, the audit smart contract SCb starts validation. As shown in Algorithm 2, SCb rebuilds the challenge set S = {ζ(τ || time || 1), ζ(τ || time || 2), …, ζ(τ || time || c)} by looking up τ on the blockchain, and computes γ = h(R). (b) SCb then checks the verification equation R · e(σ^γ, g) =? e((∏_{i=1}^{c} H(W_{s_i})^{ν_i})^γ · u^μ, PK). The check outputs ‘success’ or ‘fail’, and the result is returned to smart contract SCa. (c) After the audit, Ua and Ub can evaluate each other's reputation according to their performance in the process.

  7. Update Algorithm:

    Considering the storage particularity of the blockchain, we use the PIL structure to update the outsourced data dynamically, through the following three operations:

    Insertion: When a new data block m* is to be inserted at logical index j, the user calculates the new block's tag σ*. Meanwhile, the user updates the PIL: the pseudo index of the new data block is written into the entry pointed to by free, A[free] = (p_{j+1}, t) (the new entry points to the entry of the block that now follows it), and the predecessor entry is modified to A[p_{j−1}] = (free_old, t), where free_old is the entry just taken from the free pointer. Next, the user generates a signature S_Insert on (A, m*, σ*). Finally, the user sends (j, A, S_Insert) to the blockchain and (j, m*, σ*, S_Insert) to the CSP. On receipt, the CSP updates its local copy of A and verifies the correctness of the tag σ* for m*. If it is correct, the CSP commits the update and sends a message to the blockchain.

    Deletion: When deleting the j-th data block, the user modifies the predecessor's entry in the PIL to point past the deleted block, A[p_{j−1}] = (p_{j+1}, t), and makes the free pointer point to the freed entry p_j, free ← p_j. Next, the user generates a signature S_Deletion on A. Finally, the user sends (j, A, S_Deletion) to the blockchain and the CSP. On receipt, the CSP updates its local copy of A and verifies the signature S_Deletion. If it is correct, the CSP commits the update and sends a message to the blockchain.

    Modification: When the j-th data block m_j is modified to m_j*, the user calculates the new tag σ*. Then the user updates the PIL: the entry of p_j is modified to A[p_j] = (p_{j+1}, t), i.e. only the timestamp changes. The user generates a signature S_Modify on (A, m*, σ*). Finally, the user sends (j, A, S_Modify) to the blockchain and (j, m*, σ*, S_Modify) to the CSP. On receipt, the CSP updates its local copy of A and verifies the correctness of the tag σ* for m*. If it is correct, the CSP commits the update and sends a message to the blockchain.
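To ground the protocol steps above, here are small Python sketches written by us for illustration only: they use toy modular arithmetic over a small prime in place of the PBC pairing groups, SHA-256 in place of h and ζ, and invented parameter values. First, the Diffie-Hellman key sharing of step 2:

```python
import hashlib
import secrets

p = 2**127 - 1                     # toy prime modulus (illustration only)
g = 3

sk_a = secrets.randbelow(p - 2) + 1            # Ua's private key
sk_b = secrets.randbelow(p - 2) + 1            # Ub's private key
pk_a, pk_b = pow(g, sk_a, p), pow(g, sk_b, p)  # exchanged via the blockchain

k = pow(pk_b, sk_a, p)                         # Ua computes (g^skb)^ska
assert k == pow(pk_a, sk_b, p)                 # Ub gets the same k = g^(ska*skb)

SK = int(hashlib.sha256(str(k).encode()).hexdigest(), 16) % p   # SK = h(k)
PK = pow(g, SK, p)                                              # PK = g^SK
```

Next, the challenge derivation and proof aggregation of steps 4 and 5 (reusing p from the block above; R and the pairing are omitted, so γ is passed in as a given value). Because τ and the timestamp are public, SCb can rebuild exactly the same challenge set in step 6:

```python
import hashlib

def zeta(data: bytes, two_n: int) -> int:
    """PRF stand-in for zeta: {0,1}* -> [1, 2n]."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % two_n + 1

def h_zp(data: bytes) -> int:
    """Hash stand-in for h: {0,1}* -> Zp."""
    return int.from_bytes(hashlib.sha256(b"h|" + data).digest(), "big") % p

def challenge(tau: bytes, ts: bytes, c: int, two_n: int):
    """s_i = zeta(tau || time || i): derived from data the CSP cannot control."""
    return [zeta(tau + ts + str(i).encode(), two_n) for i in range(1, c + 1)]

def prove(tags, blocks, tau, ts, S, r0, gamma):
    """sigma = prod sigma_si^nu_i, mu' = sum nu_i*m_si, mu = r0 + gamma*mu'."""
    sigma, mu_prime = 1, 0
    for i, s in enumerate(S, start=1):
        nu = h_zp(tau + ts + str(i).encode())    # nu_i = h(tau || time || i)
        sigma = sigma * pow(tags[s], nu, p) % p
        mu_prime += nu * blocks[s]
    return r0 + gamma * mu_prime, sigma          # the pair (mu, sigma)
```

Finally, the settlement logic of the deposit contract SCa in step 3(d); the real contract is written in Solidity, so this Python class only sketches its behaviour under our assumptions:

```python
class SCa:
    """Deposit escrow sketch; party names and amounts are illustrative."""

    def __init__(self, cycletime, price):
        self.cycletime, self.price = cycletime, price
        self.deposits = {}                       # party -> escrowed amount

    def pay_deposit(self, party, amount):
        assert amount == self.price, "each party escrows exactly `price`"
        self.deposits[party] = amount

    def settle(self, result):
        """Acts on the result returned by the audit contract SCb."""
        assert set(self.deposits) == {"Ua", "Ub", "CSP"}
        if result == "success":   # CSP proved integrity: deposits become its fee
            payout = {"CSP": sum(self.deposits.values())}
        else:                     # audit failed: CSP's deposit compensates users
            half = self.deposits["CSP"] / 2
            payout = {"Ua": self.deposits["Ua"] + half,
                      "Ub": self.deposits["Ub"] + half}
        self.deposits = {}
        return payout
```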

5. Correctness & security analysis

5.1. Correctness

The data integrity verification equation in the Verify Algorithm is R · e(σ^γ, g) =? e((∏_{i=1}^{c} H(W_{s_i})^{ν_i})^γ · u^μ, PK). Its correctness follows from:

R · e(σ^γ, g)
= e(u, PK)^r · e((∏_{i=1}^{c} σ_{s_i}^{ν_i})^γ, g)
= e(u, PK)^r · e((∏_{i=1}^{c} ((H(W_{s_i}) · u^{m_{s_i}})^{SK})^{ν_i})^γ, g)
= e(u, PK)^r · e((∏_{i=1}^{c} H(W_{s_i})^{ν_i} · u^{m_{s_i}·ν_i})^γ, g)^{SK}
= e(u^r, g)^{SK} · e((∏_{i=1}^{c} H(W_{s_i})^{ν_i})^γ · u^{γμ′}, g)^{SK}
= e((∏_{i=1}^{c} H(W_{s_i})^{ν_i})^γ · u^{γμ′ + r}, g)^{SK}
= e((∏_{i=1}^{c} H(W_{s_i})^{ν_i})^γ · u^{μ}, PK),

since μ = r + γμ′ and PK = g^{SK}.
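As a quick sanity check of this derivation (our own toy computation, not part of the scheme), one can track discrete logarithms directly: represent every group element by its exponent modulo a prime group order r, so the pairing e(g^a, g^b) = gT^(a·b) becomes multiplication of exponents. The assertion below confirms that both sides of the verification equation coincide for random parameters:

```python
import secrets

r = 2**61 - 1                       # toy prime group order
c = 5
SK = secrets.randbelow(r)           # shared private key
u = secrets.randbelow(r)            # log_g(u)
h = [secrets.randbelow(r) for _ in range(c)]    # log_g H(W_si)
m = [secrets.randbelow(r) for _ in range(c)]    # challenged blocks m_si
nu = [secrets.randbelow(r) for _ in range(c)]   # nu_i values
r0 = secrets.randbelow(r)           # blinding randomness r
gamma = secrets.randbelow(r)        # gamma = h(R)

# log(sigma): sigma = prod sigma_si^nu_i with sigma_si = (H(W_si) * u^m_si)^SK
sigma = sum(nu[i] * SK * (h[i] + u * m[i]) for i in range(c)) % r
mu = (r0 + gamma * sum(nu[i] * m[i] for i in range(c))) % r    # mu = r + gamma*mu'

lhs = (u * SK * r0 + gamma * sigma) % r     # log of R * e(sigma^gamma, g)
rhs = ((gamma * sum(nu[i] * h[i] for i in range(c)) + u * mu) * SK) % r
assert lhs == rhs                   # the two sides of the equation agree
```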

5.2. Security analysis

This section proves theoretically that the proposed scheme resists the three attacks. In the following analysis, all entities other than those mentioned in the attacks are entirely trusted and comply with the audit rules.

Theorem 5.1.

The CSP cannot pass verification by generating an audit proof from other, correctly stored data blocks and tags.

Proof. Assume that the data block m_i is damaged, that another block m_j is stored intact, and that its tag is σ_j = (H(W_j) · u^{m_j})^{SK}; the relevant parameters can be obtained from the blockchain. If the CSP could pass the audit by substitution, there would exist β ∈ Zp with σ_i = σ_j^β, i.e.

(1) (H(W_i) · u^{m_i})^{SK} = (H(W_j) · u^{m_j})^{SK·β}

For Equation (1) to hold, Equations (2) and (3) must hold:

(2) H(W_i) = H(W_j)^β
(3) u^{m_i} = u^{m_j·β}

By the hardness of the discrete logarithm problem (DLP), it is infeasible to compute a β satisfying Equation (2) given H(W_i) and H(W_j), and the CSP cannot find a β such that m_i = m_j·β in Equation (3). Therefore, Equations (2) and (3) cannot be established, and the CSP's replacing attack cannot pass the audit.

Theorem 5.2.

The CSP cannot pass verification with forged audit proofs.

Proof. Assume the CSP forges a proof (μ′, σ′, R, τ, c) that satisfies the verification equation:

R · e(σ′^{γ′}, g) = e((∏_{i=1}^{c} H(W_{s_i})^{ν_i})^{γ′} · u^{μ′}, PK).

By the correctness of the scheme, the valid proof (μ, σ, R, τ, c) also satisfies

R · e(σ^{γ}, g) = e((∏_{i=1}^{c} H(W_{s_i})^{ν_i})^{γ} · u^{μ}, PK).

First suppose σ′ = σ but (γ′, μ′) ≠ (γ, μ). Dividing the two equations and substituting σ = ∏_{i=1}^{c} σ_{s_i}^{ν_i} = (∏_{i=1}^{c} H(W_{s_i})^{ν_i} · u^{∑_{i=1}^{c} m_{s_i}ν_i})^{SK} gives

σ^{γ−γ′} = (∏_{i=1}^{c} H(W_{s_i})^{ν_i})^{SK(γ−γ′)} · u^{SK(μ−μ′)}
u^{SK(μ−μ′)} = (∏_{i=1}^{c} (u^{SK·m_{s_i}})^{ν_i})^{γ−γ′}
μ − μ′ = (∑_{i=1}^{c} m_{s_i}ν_i)(γ − γ′),

so ∑_{i=1}^{c} m_{s_i}ν_i = (μ − μ′)/(γ − γ′), i.e. the claimed forgery is consistent with the stored data and is not a real forgery. Now suppose σ′ ≠ σ. Let g^x and g^y be the elements of a CDH instance and set PK = g^x, u = g^y. Define Δμ_i = μ_i − μ′_i, i = 1, 2, …, c, with at least one Δμ_i nonzero. Dividing the two equations yields

e(σ/σ′, g) = e(∏_{i=1}^{c} u^{Δμ_i}, PK) = e(g^{xy·∑_{i=1}^{c} Δμ_i}, g),

so g^{xy} = (σ/σ′)^{1/∑_{i=1}^{c} Δμ_i}. A CSP that forges a valid aggregate signature could thus solve the CDH problem. However, the CDH problem is hard, so the CSP cannot forge a valid aggregate signature within probabilistic polynomial time.

Theorem 5.3.

The CSP cannot pass verification by replaying previous proofs.

Proof. The CSP generates the challenge set S = {s1, s2, s3, …, sc}. Suppose that, when generating the proof, the CSP replaces the current information m_i of data block i with its previous version m′_i. For the proof to pass verification, H(name || A[p_i]) = H(name || A[p_i]′) would have to hold, where A[p_i]′ is the previous entry. However, the entry A[p_i] carries a timestamp and is never identical to its previous version, and the dynamic structure A is stored on the blockchain. Therefore, even if the CSP saves previously verified proofs, it cannot replay them to pass verification.

6. Performance evaluation

6.1. Functionality comparisons

This section compares and analyses the computation costs of the different entities in our scheme and in the schemes of Fan et al. (Citation2020), Li et al. (Citation2022), and Xu et al. (Citation2020), and then compares the communication costs in the data upload and audit stages. Table 2 describes the symbols used in the analysis of communication and computation costs. In all four schemes, the DO generates authentication tags before uploading data; the required tag computation costs are the same, except that of Xu et al. (Citation2020), which is (2H+2M+3E)×n. The audit process of the four schemes is initiated by the DO with the participation of the TPA or the blockchain and the CSP. The other three works transfer the audit to the TPA for verification to reduce the DO's burden; however, the CSP and TPA may conspire to falsify audit results. Our scheme uses the blockchain to remove the need to trust a TPA and realises automatic audits by deploying smart contracts in advance; the CSP periodically uses the newly generated block hash to generate the challenge subset, speeding up the overall audit. Table 3 shows that the overall computation cost of our scheme is lower than that of Xu et al. (Citation2020) and slightly higher than those of Li et al. (Citation2022) and Fan et al. (Citation2020); the powerful computing capacity of the CSP and the blockchain can absorb this extra computation. Therefore, our scheme is better suited to audit scenarios where the DO lacks sufficient computing resources and no trusted third-party entity exists. As for communication costs, Table 4 shows that the overall communication overhead of our scheme is the lowest. Other schemes require the TPA and CSP to interact during the audit to obtain the challenge subset, whereas in our scheme the CSP generates the challenge subset by itself, and in the verification phase the smart contract can judge the legitimacy of that subset, saving communication overhead. For scenarios with limited communication resources, our scheme therefore has clear advantages.

Table 2. Notations.

Table 3. Comparisons of computation costs.

Table 4. Comparisons of communication costs.

6.2. Implementation

We now evaluate the proposed scheme experimentally. The DO and CSP run on a computer with an Intel(R) Core(TM) i7-10710U CPU @ 1.10 GHz and 16 GB RAM under Ubuntu. We adopt the GMP and PBC libraries for big-integer and pairing operations and OpenSSL 3 for basic cryptographic primitives (e.g. the pseudorandom function). We deploy the smart contract on TestRPC, a locally simulated network. The DO and CSP sides are written in Python, while the auditing contract is written in Solidity. To analyse the proposed scheme while ensuring the randomness of the experimental results, we use randomly generated data as samples.

The comparison of tag generation: As shown in Figure 4, we compare the time spent in the tag generation phase with Wang et al. (Citation2020), measuring tag generation time for equal file sizes across schemes. In our scheme, users seek partners to participate in the audit and each user calculates its own tags; therefore, for the same amount of data stored at the CSP, tag generation time is halved compared with the other schemes. At 500 blocks, where tag generation takes 0.0032 s, our scheme performs best.

Figure 4. The comparison of tag generation.


Computation costs of proof generation (300 and 460 challenged blocks): In the proof generation experiment, we test the time cost while varying the number of challenged blocks and the size of the data blocks. As shown in Figure 5, we compare the proof generation time under the two tamper-detection standards. When the number and size of the challenged data blocks increase, the proof generation time grows slowly; as the data block capacity keeps increasing, the difference between the 300-block and 460-block standards stays below 40%.

Figure 5. The computation costs of proof generation.


The comparison of dynamic structure sizes: Among dynamic audit schemes for cloud storage, the DHT structure proposed by Tian et al. (Citation2015) is a classic, so we compare it with the PIL structure used in this paper. DHT outperforms earlier schemes in block insertion and deletion thanks to its linked lists, which require a certain number of pointers. The PIL structure binds data blocks to insensitive pseudo index information and stores them in array form, avoiding the space consumed by pointers. The experiment first compares the memory spent by PIL and DHT when storing indexes for different amounts of data. As shown in Figure 6, the abscissa is the number of stored data blocks and the ordinate is the memory overhead for the corresponding number of indexes. The results show that the required memory grows with the amount of data, but the average memory overhead of PIL is consistently smaller than DHT's, because PIL links logically adjacent data blocks through pseudo index information rather than pointers, significantly reducing memory overhead.

The experiment then compares the average time spent by PIL and DHT on insertion, deletion, and modification for different amounts of data. As shown in Figure 7, the abscissa is the number of inserted data blocks and the ordinate is the time overhead of the corresponding insertions. The average insertion time of PIL is smaller than DHT's overall, because PIL needs to modify the pseudo index information of at most two data blocks, whereas DHT must regenerate the block information, insert it at the specified location, and then link the neighbouring blocks with pointers. As shown in Figure 8, the abscissa is the number of deleted data blocks and the ordinate is the time cost of the corresponding deletions; PIL again has a clear advantage, for the same reason as in the insertion experiment. As shown in Figure 9, the abscissa is the number of updated data blocks and the ordinate is the time overhead of the corresponding modifications; as in the experiments above, PIL spends the least time for all data volumes.

Figure 6. The comparison of dynamic structures size.


Figure 7. The comparison of data insertion operations.


Figure 8. The comparison of data deletion operations.


Figure 9. The comparison of data modification operations.


7. Conclusion

Considering the security and efficiency problems of current public cloud-data audit schemes, this paper proposes a blockchain-based multi-user dynamic public audit scheme. Audit tasks are entrusted to smart contracts on the blockchain in a multi-user scenario, and the audit stage uses a non-interactive audit mode that suits deployment on a blockchain platform. The scheme introduces the PIL structure for dynamically updating data blocks, realising fully dynamic update operations on the blockchain. This paper mainly studies blockchain-based data auditing and the dynamic data update problem; considering practical application scenarios, balancing efficiency and security in blockchain-based dynamic audit schemes still requires further research.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by Science and Technology Project of Putian City [grant number 2021R4001-10], the Presidential Research Fund of Minnan Normal University [grant number KJ18024], the National Social Science Fund of China [grant number 21XTQ015], and the Natural Science Foundation of Fujian Province of China [grant number 2019J01752].

References

  • Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., & Zaharia, M. (2010). A view of cloud computing. Communications of the ACM, 53(4), 50–58. https://doi.org/10.1145/1721654.1721672
  • Ateniese, G., Burns, R., Curtmola, R., Herring, J., Kissner, L., Peterson, Z., & Song, D. (2007). Provable data possession at untrusted stores. In Proceedings of the 14th ACM Conference on Computer and Communications Security (pp. 598–609).
  • Ateniese, G., Di Pietro, R., Mancini, L. V., & Tsudik, G. (2008). Scalable and efficient provable data possession. In Proceedings of the 4th International Conference on Security and Privacy in Communication Netowrks (pp. 1–10).
  • Begam, O. R., Manjula, T., Manohar, T. B., & Susrutha, B. (2012). Cooperative schedule data possession for integrity verification in multi-cloud storage. International Journal of Modern Engineering Research (IJMER), 3, 2726–2741.
  • Campanelli, M., Fiore, D., Greco, N., Kolonelos, D., & Nizzardo, L. (2020). Incrementally aggregatable vector commitments and applications to verifiable decentralized storage. In International Conference on the Theory and Application of Cryptology and Information Security (pp. 3–35).
  • Curtmola, R., Khan, O., Burns, R., & Ateniese, G. (2008). MR-PDP: Multiple-replica provable data possession. In 2008 the 28th International Conference on Distributed Computing Systems (pp. 411–420).
  • Dodis, Y., Vadhan, S., & Wichs, D. (2009). Proofs of retrievability via hardness amplification. In Theory of Cryptography Conference (pp. 109–127).
  • Duan, H., Du, Y., Zheng, L., Wang, C., Au, M. H., & Wang, Q (2022). Towards practical auditing of dynamic data in decentralized storage. IEEE Transactions on Dependable and Secure Computing, 20(1), 708–723. https://doi.org/10.1109/TDSC.2022.3142611
  • Erway, C. C., Küpçü, A., Papamanthou, C., & Tamassia, R. (2015). Dynamic provable data possession. ACM Transactions on Information and System Security (TISSEC), 17(4), 1–29. https://doi.org/10.1145/2699909
  • Fan, K., Bao, Z., Liu, M., Vasilakos, A. V., & Shi, W. (2020). Dredas: Decentralized, reliable and efficient remote outsourced data auditing scheme with blockchain smart contract for industrial IoT. Future Generation Computer Systems, 110, 665–674. https://doi.org/10.1016/j.future.2019.10.014
  • Francati, D., Ateniese, G., Faye, A., Milazzo, A. M., Perillo, A. M., Schiatti, L., & Giordano, G. (2021). Audita: A blockchain-based auditing framework for off-chain storage. In Proceedings of the Ninth International Workshop on Security in Blockchain and Cloud Computing (pp. 5–10).
  • Huang, Y., Yu, Y., Li, H., Li, Y., & Tian, A. (2022). Blockchain-based continuous data integrity checking protocol with zero-knowledge privacy protection. Digital Communications and Networks, 8(5), 604–613. https://doi.org/10.1016/j.dcan.2022.04.017
  • Juels, A., & Kaliski Jr, B. S. (2007). PORs: Proofs of retrievability for large files. In Proceedings of the 14th ACM Conference on Computer and Communications Security (pp. 584–597).
  • Kamvar, S. D., Schlosser, M. T., & Garcia-Molina, H. (2003). The eigentrust algorithm for reputation management in P2P networks. In Proceedings of the 12th International Conference on World Wide Web (pp. 640–651).
  • Kandukuri, B. R., & Rakshit, A. (2009). Cloud security issues. In 2009 IEEE International Conference on Services Computing (pp. 517–520).
  • Katz, J., & Lindell, A. Y. (2008). Aggregate message authentication codes. In Cryptographers' Track at the RSA Conference (pp. 155–169).
  • Kopp, H., Bösch, C., & Kargl, F. (2016). Koppercoin–a distributed file storage with financial incentives. In International Conference on Information Security Practice and Experience (pp. 79–93).
  • Kumar, A. (2020). A novel privacy preserving HMAC algorithm based on homomorphic encryption and auditing for cloud. In 2020 Fourth International Conference on I-Smac (IoT in Social, Mobile, Analytics and Cloud)(I-Smac) (pp. 198–202).
  • Labs, P. (2017). Filecoin: A decentralized storage network. https://filecoin.io/filecoin.pdf
  • Li, A., Tian, G., Miao, M., & Gong, J. (2022). Blockchain-based cross-user data shared auditing. Connection Science, 34(1), 83–103. https://doi.org/10.1080/09540091.2021.1956879
  • Li, S., Liu, J., Yang, G., & Han, J. (2020). A blockchain-based public auditing scheme for cloud storage environment without trusted auditors. Wireless Communications and Mobile Computing, 2020, 1–13. https://doi.org/10.1155/2020/8841711.
  • Liang, W., Yang, Y., Yang, C., Hu, Y., Xie, S., Li, K. C., & Cao, J. (2022). PDPChain: A consortium blockchain-based privacy protection scheme for personal data. IEEE Transactions on Reliability, 1–13. https://doi.org/10.1109/TR.2022.3190932
  • Liang, W., Zhang, D., Lei, X., Tang, M., Li, K. C., & Zomaya, A. Y. (2020). Circuit copyright blockchain: Blockchain-based homomorphic encryption for IP circuit protection. IEEE Transactions on Emerging Topics in Computing, 9(3), 1410–1420. https://doi.org/10.1109/TETC.2020.2993032
  • Liu, B., Lu, J., & Yip, J. (2009). XML data integrity based on concatenated hash function. arXiv preprint arXiv:0906.3772.
  • Liu, C., Ranjan, R., Yang, C., Zhang, X., Wang, L., & Chen, J. (2014). MuR-DPA: Top-down levelled multi-replica merkle hash tree based secure public auditing for dynamic big data storage on cloud. IEEE Transactions on Computers, 64(9), 2609–2622. https://doi.org/10.1109/TC.2014.2375190
  • Sebé, F., Domingo-Ferrer, J., Martinez-Balleste, A., Deswarte, Y., & Quisquater, J. J. (2008). Efficient remote data possession checking in critical information infrastructures. IEEE Transactions on Knowledge and Data Engineering, 20(8), 1034–1038. https://doi.org/10.1109/TKDE.2007.190647
  • Shen, J., Shen, J., Chen, X., Huang, X., & Susilo, W. (2017). An efficient public auditing protocol with novel dynamic structure for cloud data. IEEE Transactions on Information Forensics and Security, 12(10), 2402–2415. https://doi.org/10.1109/TIFS.2017.2705620
  • Susilo, W., Li, Y., Guo, F., Lai, J., & Wu, G. (2022). Public cloud data auditing revisited: Removing the tradeoff between proof size and storage cost. In Computer Security–Esorics 2022: 27th European Symposium on Research in Computer Security, Copenhagen, Denmark, September 26–30, 2022, Proceedings, Part II (pp. 65–85).
  • Tian, G., Hu, Y., Wei, J., Liu, Z., Huang, X., Chen, X., & Susilo, W. (2021). Blockchain-based secure deduplication and shared auditing in decentralized storage. IEEE Transactions on Dependable and Secure Computing, 19(6), 3941–3954. https://doi.org/10.1109/TDSC.2021.3114160
  • Tian, H., Chen, Y., Chang, C. C., Jiang, H., Huang, Y., Chen, Y., & Liu, J. (2015). Dynamic-hash-table based public auditing for secure cloud storage. IEEE Transactions on Services Computing, 10(5), 701–714. https://doi.org/10.1109/TSC.2015.2512589
  • Velte, T., Velte, A., & Elsenpeter, R. (2010). Cloud computing: A practical approach. New York, NY: McGraw-Hill.
  • Wang, B., Li, B., & Li, H. (2014). Oruta: Privacy-preserving public auditing for shared data in the cloud. IEEE Transactions on Cloud Computing, 2(1), 43–56. https://doi.org/10.1109/TCC.2014.2299807
  • Wang, C., Chow, S. S., Wang, Q., Ren, K., & Lou, W. (2011). Privacy-preserving public auditing for secure cloud storage. IEEE Transactions on Computers, 62(2), 362–375. https://doi.org/10.1109/TC.2011.245
  • Wang, C., Wang, Q., Ren, K., & Lou, W. (2009). Ensuring data storage security in cloud computing. In Quality of Service, 2009. IWQOS. 17th International Workshop on (pp. 1–9).
  • Wang, C., Wang, Q., Ren, K., & Lou, W. (2010a). Privacy-preserving public auditing for data storage security in cloud computing. In 2010 Proceedings IEEE Infocom (pp. 1–9).
  • Wang, H., Qin, H., Zhao, M., Wei, X., Shen, H., & Susilo, W. (2020). Blockchain-based fair payment smart contract for public cloud storage auditing. Information Sciences, 519, 348–362. https://doi.org/10.1016/j.ins.2020.01.051
  • Wang, Q., Wang, C., Ren, K., Lou, W., & Li, J. (2010b). Enabling public auditability and data dynamics for storage security in cloud computing. IEEE Transactions on Parallel and Distributed Systems, 22(5), 847–859. https://doi.org/10.1109/TPDS.2010.183
  • Wu, J., Li, Y., Wang, T., & Ding, Y. (2019). CPDA: A confidentiality-preserving deduplication cloud storage with public cloud auditing. IEEE Access, 7, 160482–160497. https://doi.org/10.1109/Access.6287639
  • Xiao, J., Huang, H., Wu, C., Chen, Q., & Huang, Z. (2022). A blockchain-based collaborative auditing scheme for cloud storage. In International Symposium on Cyberspace Safety and Security (pp. 147–159).
  • Xie, M., Yu, Y., Chen, R., Li, H., Wei, J., & Sun, Q. (2022). Accountable outsourcing data storage atop blockchain. Computer Standards & Interfaces, 82, 103628. https://doi.org/10.1016/j.csi.2022.103628
  • Xu, Y., Ding, L., Cui, J., Zhong, H., & Yu, J. (2020). PP-CSA: A privacy-preserving cloud storage auditing scheme for data sharing. IEEE Systems Journal, 15(3), 3730–3739. https://doi.org/10.1109/JSYST.2020.3018692
  • Yaling, Z., & Li, S. (2020). Dynamic flexible multiple-replica provable data possession in cloud. In 2020 17th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP) (pp. 291–294).
  • Yan, H., & Gui, W. (2021). Efficient identity-based public integrity auditing of shared data in cloud storage with user privacy preserving. IEEE Access, 9, 45822–45831. https://doi.org/10.1109/ACCESS.2021.3066497
  • Yang, X., Wang, M., Li, T., Liu, R., & Wang, C. (2020). Privacy-preserving cloud auditing for multiple users scheme with authorization and traceability. IEEE Access, 8, 130866–130877. https://doi.org/10.1109/ACCESS.2020.3009539
  • Yuan, H., Chen, X., Wang, J., Yuan, J., Yan, H., & Susilo, W. (2020). Blockchain-based public auditing and secure deduplication with fair arbitration. Information Sciences, 541, 409–425. https://doi.org/10.1016/j.ins.2020.07.005
  • Zhang, C., Xu, Y., Hu, Y., Wu, J., Ren, J., & Zhang, Y. (2021). A blockchain-based multi-cloud storage data auditing scheme to locate faults. IEEE Transactions on Cloud Computing, 10(4), 2252–2263. https://doi.org/10.1109/TCC.2021.3057771
  • Zhang, Y., Xu, C., Lin, X., & Shen, X. (2019). Blockchain-based public integrity verification for cloud storage against procrastinating auditors. IEEE Transactions on Cloud Computing, 9(3), 923–937. https://doi.org/10.1109/TCC.2019.2908400
  • Zhu, Y., Hu, H., Ahn, G. J., & Yu, M. (2012). Cooperative provable data possession for integrity verification in multicloud storage. IEEE Transactions on Parallel and Distributed Systems, 23(12), 2231–2244. https://doi.org/10.1109/TPDS.2012.66
  • Zhu, Y., Wang, H., Hu, Z., Ahn, G. J., Hu, H., & Yau, S. S. (2011). Dynamic audit services for integrity verification of outsourced storages in clouds. In Proceedings of the 2011 ACM Symposium on Applied Computing (pp. 1550–1557).
  • Zikratov, I., Kuzmin, A., Akimenko, V., Niculichev, V., & Yalansky, L. (2017). Ensuring data integrity using blockchain technology. In 2017 20th Conference of Open Innovations Association (Fruct) (pp. 534–539).