
FedG2L: a privacy-preserving federated learning scheme based on "G2L" against poisoning attacks

Article: 2197173 | Received 28 Nov 2022, Accepted 22 Mar 2023, Published online: 06 Apr 2023

Abstract

Federated learning (FL) can push past the limitation of the "Data Island" while protecting data privacy, which has been a broad concern. However, centralised FL is vulnerable to single-point failure. While decentralised and tamper-proof blockchains can cope with this issue, it is difficult to find a benign benchmark gradient and to eliminate poisoning attacks in the later stage of global model aggregation. To address these problems, we present a global-to-local privacy-preserving federated consensus scheme against poisoning attacks (FedG2L). This scheme effectively reduces the influence of poisoning attacks on model accuracy. In the global aggregation stage, a gradient-similarity-based secure consensus algorithm (SecPBFT) is designed to eliminate malicious gradients. During this procedure, the gradients of the data owners are not leaked. Then, we propose an improved ACGAN algorithm to generate local data to further update the model without poisoning attacks. Finally, we theoretically prove the security and correctness of our scheme. Experimental results demonstrate that the model accuracy is improved by at least 55% compared with the no-defense scheme, and the attack success rate is reduced by more than 60%.

1. Introduction

Federated learning (FL), in which clients upload gradients to aggregate a global model, can effectively solve the "Data Island" issue caused by the difficulty of data sharing (difficulty of trust or communication between users) (Achituve et al., Citation2021). However, the centralised nature of the traditional FL aggregation process makes it easy to lose or even tamper with data due to single-point failure (Feng et al., Citation2021a). Although the decentralisation and tamper-proof characteristics of blockchain provide a potential solution to these problems (Feng et al., Citation2021b; Jia et al., Citation2021), it is difficult to guarantee the authenticity of data before it is recorded on the blockchain, and data privacy is easily leaked after it is recorded, which limits the wide application of blockchain in FL. For example, by obtaining the different local gradients recorded on a block, an attacker can launch gradient leakage attacks to infer the local data distribution or even the original data (Wang et al., Citation2022a; Wei & Liu, Citation2021), and purposely degrade the global model performance by uploading malicious gradients (Chen et al., Citation2021a; Weerasinghe et al., Citation2021; Wen et al., Citation2021a).

To protect gradient privacy and resist poisoning attacks, a large number of privacy-preserving federated learning schemes against poisoning attacks have been proposed (Liu et al., Citation2021; Ma et al., Citation2022; Miao et al., Citation2022). The main idea of these schemes is to protect gradient privacy by using additively or multiplicatively homomorphic cryptography to aggregate the encrypted gradients. In the aggregation process, homomorphic cryptography is further used to eliminate malicious gradients by calculating the similarity between different gradients; a gradient with low similarity is judged to be malicious.

The above methods can effectively resist poisoning attacks while protecting gradient privacy. However, they assume that there is a benign benchmark gradient in the system, that is, that the cloud platform can provide a benign benchmark gradient to compare with the local gradients and eliminate, or assign lower weights to, the gradients with low similarity. This assumption does not hold in blockchain systems. The blockchain has no initial fully trusted node for data collection and gradient calculation, so it is difficult to obtain benign benchmark gradients for similarity calculation. In addition, the performance of the global model under poisoning attacks at different aggregation stages is not considered in the existing schemes. Specifically, because the gradients change greatly in the initial stage of global model aggregation, partial poisoning attacks can be eliminated during global model aggregation. In the later stage of global model aggregation, the gradients change little and tend to convergence, so it is difficult to eliminate poisoning attacks in this stage. We further verify these conclusions in the experiments shown in Figure . The results show that, as the number of epochs increases, the performance of the global model degrades more when the poisoning occurs late. When the model is trained for more than 70 epochs, the global model performance is degraded by at least 55% by the poisoning attack. Therefore, how to eliminate late-aggregation poisoning attacks is another issue to be solved urgently.

To solve the above issues, we propose a global-to-local (G2L) based anti-poisoning privacy-preserving federated consensus scheme. Specifically, we construct a G2L two-stage model aggregation framework. We introduce the Paillier homomorphic cryptosystem (Paillier, Citation1999) to design a secure consensus algorithm based on gradient similarity, which eliminates malicious gradients while protecting the privacy of benign gradients. Then, generative adversarial networks (GANs) are introduced to generate more data and further converge the final model on the basis of the global update (Cai et al., Citation2022; Kim et al., Citation2022). Since the model convergence process uses only the local data of each node, the influence of poisoning attacks in the later aggregation stage on the global model performance is eliminated. The main contributions of this paper are as follows.

  • In the global aggregation stage, a secure consensus algorithm (SecPBFT) based on gradient similarity is proposed. The locally trained gradients of all nodes are broadcast in each round of consensus, and a security clustering algorithm based on density-based spatial clustering of applications with noise (DBSCAN) is dynamically designed to partition the gradients and eliminate malicious gradients in each round of consensus. During this process, intermediate key parameters, such as the gradients of all nodes, are not leaked.

  • In the local convergence stage, we introduce a GAN to generate more local data for each node. On the basis of the globally aggregated model, we further train the local model until convergence. The influence of poisoning attacks in the later stage of global aggregation is eliminated, and the accuracy of the final model is significantly improved.

  • We rigorously prove the correctness and security of the proposed scheme. Experiments conducted on two open datasets show that the model accuracy is improved by at least 55% compared with the no-defense scheme, and the attack success rate is reduced by more than 60%.

Gradient leakage attacks and poisoning attacks generally exist in scenarios that require machine learning model training, including multi-entity scenarios that need mutual communication and cooperation, such as finance, medical care, the Internet of vehicles, and drones. For example, it is necessary to accurately judge the condition of diabetes patients while protecting their privacy, without being disturbed by malicious poisoning. Part of this paper was presented at the 8th International Symposium on Security and Privacy in Social Networks and Big Data (SocialSec2022) (Xu & Li, Citation2022). In this paper, we further propose a two-stage federated learning architecture from global to local. Building on the design of SecPBFT, we consider the impact of poisoning at different periods on model performance and introduce a GAN in the local convergence stage to completely eliminate late poisoning attacks (Section 1). We update the latest related works on FL against poisoning attacks in Section 2 and supplement the basic concepts of GANs in Section 3.4; PBFT and DBSCAN are also redescribed in Section 3. In Section 4.2, we refine the threat model of the proposed scheme and discuss it from two aspects: gradient leakage attacks and poisoning attacks. In Section 5, we introduce the GAN algorithm in the local convergence stage to further converge the local model on the basis of the global aggregation. This process can completely eliminate the influence of late poisoning attacks because the final model is trained entirely at the local node. Finally, we supplement a rigorous proof of the correctness of the proposed scheme in Section 6 and conduct detailed experiments in Section 7.

The remainder of this paper is organised as follows. Section 2 describes the related works and compares them with our proposal. Section 3 outlines the mathematical notation and preliminaries. The problem formulation of this paper is introduced in Section 4. FedG2L is proposed in detail in Section 5. Section 6 and Section 7 present the theoretical analysis and performance evaluation. Finally, a conclusion is given in Section 8.

2. Related works

2.1. Anti-poisoning attacks

To eliminate the influence of poisoning attacks, Levine and Feizi (Citation2020) proposed two provable new defenses against poisoning attacks: a certified defense against a general poisoning threat model and a certified defense against label-flipping poisoning attacks. Chen et al. (Citation2021b) proposed an attack-agnostic defense against poisoning attacks; it can distinguish poisoned samples from clean samples without any explicit knowledge of the machine learning algorithm or the type of poisoning attack, thus effectively detecting poisoned data. Wen et al. (Citation2021b) proposed a new anti-poisoning algorithm; incorporating the probability estimation of clean data points into the algorithm significantly improves the state-of-the-art defense algorithm TRIM (Jagielski et al., Citation2018) and effectively reduces the errors caused by poisoned datasets.

Furthermore, to eliminate the influence of poisoning attacks in FL, Shejwalkar and Houmansadr (Citation2021) designed a divide-and-conquer defense scheme: the principal component of the set of updates is first computed, the scalar product with this principal component is then calculated for each submitted model update, and finally the updates with the largest projections are removed. A client-side defense (FL-WBC) is proposed by Sun et al. (Citation2021); FL-WBC identifies the parameter space in which a model poisoning attack has a long-lasting effect on the parameters. In addition, they derived certified robustness and convergence guarantees for FedAvg when FL-WBC is applied. A defense scheme called CONTRA is proposed by Awan et al. (Citation2021) against attacks such as label-flipping and backdoor attacks in FL systems. CONTRA implements a cosine-similarity-based measure to determine the credibility of the local model parameters in each round and dynamically promotes or penalises the reputation of individual clients based on their history with respect to the global model.

2.2. Privacy-preserving FL schemes against poisoning attacks

Currently, some studies aim to preserve the privacy of shared gradients while resisting poisoning attacks. An FL framework with enhanced privacy (PEFL) is proposed in Liu et al. (Citation2021). The framework is based on homomorphic encryption to ensure the integrity and availability of the global model. However, if the server in the framework is compromised, the decryption key is leaked directly, causing the privacy of all intermediate parameters to be leaked; Schneider et al. (Citation2023) have confirmed this challenge and carried out a detailed analysis. Ma et al. (Citation2022) designed a combined privacy protection solution based on a double-trapdoor, two-server design. They propose a secure computational method to measure the distance between two encrypted gradients in order to eliminate malicious gradients, which provides a feasible way to detect poisoning attacks. However, in this scheme, once the servers collude, the privacy of local users is leaked. To reduce the privacy-leakage risk faced by the server, blockchain-based FL schemes have been proposed (Hou et al., Citation2021; Li et al., Citation2022; Yang et al., Citation2023). Miao et al. (Citation2022) propose a blockchain-based secure Byzantine-robust federated learning (PBFL) scheme, in which cosine similarity is used to judge whether uploaded gradients are malicious. By introducing the CKKS fully homomorphic encryption system, the computational cost of encrypting multi-dimensional gradients is reduced to a certain extent. Nonetheless, this work assumes the presence of benign baseline gradients in the system.

In recent years, there have been some works that resist poisoning attacks in federated learning, but challenges such as dependence on a benchmark gradient remain (Li et al., Citation2023; Xiao et al., Citation2023).

In summary, current research on privacy-preserving federated learning against poisoning attacks has made great progress. However, how to ensure the authenticity (absence of poisoning attacks) of on-chain data and how to eliminate the impact of late-aggregation poisoning attacks on global model performance remain two main challenges in decentralised FL research. Finally, we summarise the existing poisoning-attack defense schemes in terms of their application scenarios and requirements in Table .

Table 1. Summary of existing poisoning attack defense schemes in FL.

3. Preliminaries

3.1. Notations

To facilitate the description, we first define some symbols and give the corresponding description in Table .

Table 2. Notations.

3.2. Practical byzantine fault tolerance (PBFT)

This scheme uses the PBFT consensus for global gradient aggregation. As shown in Figure , the PBFT algorithm, which addresses the Byzantine Generals Problem, mainly consists of the following five steps.

Figure 1. The PBFT consensus protocol.

Figure 1. The PBFT consensus protocol.

  1. A Leader is selected from among the nodes of the entire network and is responsible for generating new blocks.

  2. Each entity broadcasts its messages to the entire network. The Leader places the transactions to be included in the new block into a list and broadcasts this list to the entire network.

  3. On receiving the transaction list, each node executes the transactions in order. After completing all transactions, it calculates the digest of the new block based on the transaction results and broadcasts the digest to the entire network.

  4. If a node receives digests equal to its own from 2f other nodes (f is the tolerable number of Byzantine nodes), it broadcasts a commit message to the entire network.

  5. If a node receives 2f + 1 commit messages, it appends the new block to its blockchain and updates its state database.

In the PBFT algorithm, there is no need to wait for confirmations of a transaction: if a block is approved by the system through the PBFT algorithm, then this block is final and will not be revoked. Since all nodes reach consensus at the same time, a blockchain maintained with PBFT is not prone to forks, so there is no need to wait for confirmations to ensure that the current block is on the longest chain. In addition, PBFT does not require mining, and each consensus round consumes far less power than PoW (Büyüközkan & Tüfekçi, Citation2021). A minimal sketch of the quorum logic in steps 4 and 5 is given below.
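
The following is a minimal sketch, in Python, of the digest and commit quorum checks described in steps 4 and 5; the class and method names (`PBFTNode`, `on_digest`, `on_commit`) are illustrative and not taken from the paper's implementation.

```python
from typing import Dict, Set

class PBFTNode:
    """Minimal sketch of the PBFT digest/commit quorum checks (steps 4-5)."""

    def __init__(self, node_id: int, f: int):
        self.node_id = node_id
        self.f = f                        # tolerable number of Byzantine nodes
        self.digests: Dict[int, str] = {} # block digests received from other nodes
        self.commits: Set[int] = set()    # ids of nodes that sent commit messages

    def on_digest(self, sender_id: int, digest: str, own_digest: str) -> bool:
        # Step 4: after 2f digests matching our own, broadcast a commit message.
        self.digests[sender_id] = digest
        matching = sum(1 for d in self.digests.values() if d == own_digest)
        return matching >= 2 * self.f

    def on_commit(self, sender_id: int) -> bool:
        # Step 5: after 2f + 1 commit messages, append the block and update state.
        self.commits.add(sender_id)
        return len(self.commits) >= 2 * self.f + 1
```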

3.3. DBSCAN algorithm

To eliminate malicious gradients and ensure the performance of the global model, we develop a secure aggregated-gradient consensus process based on the DBSCAN algorithm (Ester et al., Citation1996), which has received much attention in federated learning (Agrawal et al., Citation2021; de Sousa Pacheco et al., Citation2021; Ge et al., Citation2022; Wang et al., Citation2022b). DBSCAN forms clusters from maximal sets of density-connected points. As shown in Figure , the DBSCAN algorithm can be divided into two steps.

Figure 2. The process of DBSCAN. (a) First step (b) Second step.

Figure 2. The process of DBSCAN. (a) First step (b) Second step.

  1. Forming temporary clusters. First, all samples are scanned. If a sample has at least MinPoints samples within radius R, it is added to the core-point list and to the corresponding temporary cluster.

  2. Merging the temporary clusters. For each point in a temporary cluster, if that point is itself a core point, its temporary cluster is merged with the current one to obtain a new temporary cluster. This operation is repeated until no point in the cluster is a core point or all of its directly density-reachable points are already contained in the cluster. The remaining temporary clusters are merged in the same way until all temporary clusters have been processed (a small sketch of this process follows the list).
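
The small sketch below illustrates the two-step DBSCAN process using scikit-learn; the synthetic data, the radius `eps`, and `min_samples` (MinPoints) are arbitrary values chosen for illustration only.

```python
# DBSCAN illustration: one dense cluster of density-connected points plus a few
# isolated points that do not reach the MinPoints threshold and become noise.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
dense_cluster = rng.normal(loc=0.0, scale=0.1, size=(20, 2))  # density-connected points
outliers = rng.normal(loc=3.0, scale=0.1, size=(3, 2))        # too few to form a cluster
X = np.vstack([dense_cluster, outliers])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
print(labels)  # points in the dense region share one cluster id; label -1 marks noise
```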

3.4. Generative adversarial network

The Generative Adversarial Network (GAN) was proposed by Ian Goodfellow of the University of Montreal in 2014 (Goodfellow et al., Citation2020) and is a type of unsupervised architecture. As shown in Figure , a GAN consists of two independent networks that compete against each other. The first network is the discriminator, which is trained to determine whether data are real. The second network is the generator, which generates random samples similar to real samples. The discriminator D distinguishes between different images, while the generator G generates fake images that resemble real images in order to deceive D. During training, D receives both fake data generated by G and real data, and its goal is to determine whether each image is real or fake. The parameters of both networks are adjusted iteratively until the two networks reach an equilibrium.

Figure 3. The architecture of GAN.

Figure 3. The architecture of GAN.

Specifically, the input of the generator is an n-dimensional vector and its output is an image of a fixed pixel size, so we first need the input vector. Here, the input vector is considered to carry some information about the output, such as which digit is handwritten and how sloped the handwriting is. We do not need any specific information about the output digit, only that it looks as close as possible to a real handwritten digit (enough to fool the discriminator). Therefore, we can use randomly generated vectors as input, where the random input should follow a common distribution such as the uniform or Gaussian distribution. For the discriminator, the input is typically a picture and the output is the authenticity label of the picture.

Then, the generator and discriminator are trained.

  1. Initialise the parameters $\theta_D$ of the discriminator and $\theta_G$ of the generator.

  2. Sample $m$ real samples $(x_1, x_2, \ldots, x_m)$ from the training data and $m$ noise samples $(z_1, z_2, \ldots, z_m)$ from the prior noise distribution, and obtain $m$ generated samples through the generator. With the generator G fixed, the discriminator D is trained to distinguish real samples from generated samples as accurately as possible.

  3. After k updates of the discriminator, the parameters of the generator are updated with a smaller learning rate; the generator is trained to reduce the gap between the generated samples and the real samples as much as possible, which is equivalent to making the discriminator misclassify as often as possible.

  4. After many iterations, the ideal outcome is that the discriminator cannot tell whether a sample comes from the generator or from the real data; in other words, the final discrimination probability is 0.5. A minimal sketch of this alternating training is shown after this list.
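
The sketch below shows the alternating training of steps 1-4 in PyTorch under simplifying assumptions: the network sizes, the value of k, the learning rates, and the random stand-in data are placeholders, not the settings used in the paper.

```python
# Minimal GAN training step: k discriminator updates with G fixed, then one
# generator update with a smaller learning rate, as in steps 2-3 above.
import torch
import torch.nn as nn

dim_z, dim_x, k = 16, 64, 1
G = nn.Sequential(nn.Linear(dim_z, 64), nn.ReLU(), nn.Linear(64, dim_x))
D = nn.Sequential(nn.Linear(dim_x, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)  # smaller learning rate for G (step 3)
bce = nn.BCELoss()

def train_step(x_real):
    m = x_real.size(0)
    for _ in range(k):                              # step 2: train D with G fixed
        z = torch.randn(m, dim_z)
        x_fake = G(z).detach()
        loss_D = bce(D(x_real), torch.ones(m, 1)) + bce(D(x_fake), torch.zeros(m, 1))
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()
    z = torch.randn(m, dim_z)                       # step 3: train G to fool D
    loss_G = bce(D(G(z)), torch.ones(m, 1))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()

print(train_step(torch.randn(32, dim_x)))           # usage with random stand-in data
```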

4. Problem formulation

4.1. System model

In this paper, we design a two-stage "G2L" FL consensus scheme. As shown in Figure , the proposed system model consists only of multiple nodes, and there are four entities (roles) in the system. The roles of the different entities are as follows.

Figure 4. System model.

Figure 4. System model.

  • KGC: The KGC is an independent and well-known institution that generates public and private keys and distributes the key pairs $(pk_c, sk_c)$ and $(pk_d, sk_d)$ to the Leaders and Followers, and to the Data Owners, respectively.

  • Data Owner (DO): DOs upload sub-gradients in the aggregation phase and generate new data to update the final model in the local convergence phase.

  • Leader: The Leader obtains an encrypted gradient from DOs, interacts with the followers to verify the consistency of the request through the consensus agreement, and stores the encrypted gradient.

  • Follower: The Follower receives the gradients from the Leader. It aggregates and eliminates malicious gradients through the consensus process.

4.2. Threat model

In this paper, DOs obtain the best global model through iterative updates of shared sub-gradients. During this process, DOs do not want to leak the privacy of their sub-gradients. Leaders and Followers may perform malicious operations, such as tampering with the gradients that need to be aggregated. Malicious operations can also be executed by DOs, which may influence or even control the global model by uploading malicious gradients.

4.2.1. Gradient leakage attack.

Nodes extract different local gradients from blocks. Since the gradient is mapped from local data, malicious nodes can infer the data distribution of benign nodes or even the original data through the gradient.

4.2.2. Poisoning attack.

Malicious DOs can insert carefully crafted samples into the training set, upload malicious local gradients, and damage the global gradient aggregation process to change the model's behaviour and degrade its performance. In this paper, consistent with the assumptions of existing schemes, we assume that poisoning attacks lead to a larger distance or a lower similarity between the locally trained gradient and the benign gradients, as illustrated by the toy example below.
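
The following toy example, with entirely synthetic values, illustrates this assumption: a crafted malicious update lies far from the benign updates in Euclidean distance and has low cosine similarity to them.

```python
# Synthetic illustration of the distance/similarity assumption on local gradients.
import numpy as np

benign = [np.array([0.10, -0.20, 0.05]) + np.random.default_rng(i).normal(0, 0.01, 3)
          for i in range(4)]
poisoned = np.array([-0.90, 0.80, -0.70])            # crafted malicious update

reference = np.mean(benign, axis=0)                   # stand-in for a benign gradient
for g in benign + [poisoned]:
    dist = np.linalg.norm(g - reference)
    cos = g @ reference / (np.linalg.norm(g) * np.linalg.norm(reference))
    print(f"distance={dist:.3f}  cosine={cos:.3f}")   # poisoned: large distance, low cosine
```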

4.3. Design goals

FedG2L has two design goals as follows.

  • Correctness: After the local model is updated following the global aggregation phase, the correct average gradient can still be obtained through l iterations of average-gradient calculation.

  • Security: A protocol is secure against an adversary if it does not allow participants to learn additional information. In this paper, it is necessary to protect the global model from poisoning attacks and to ensure that the gradients in the Leader's consensus and the other intermediate parameters remain confidential.

5. Proposed scheme

In this section, we introduce FedG2L in detail. Its structure, shown in Figure , consists of three stages: system initialisation, model secure consensus, and local model convergence. Unlike existing centralised FL schemes, the gradients in FedG2L are shared through the consensus process without a central server, and the average gradient is aggregated across multiple consensus iterations without a benign benchmark gradient. To break through the limitation of the "Data Island" while remaining unaffected by poisoning attacks, we further use a GAN to generate local data in the later stage of global aggregation and train until the local model converges. Since this process uses only local data and no information from other nodes, malicious nodes cannot poison other nodes' models through aggregation.

Figure 5. The process of FedG2L.

Figure 5. The process of FedG2L.

5.1. System initialization

The Paillier encryption is used by the DOs, Leaders, and Followers to generate and release the public/private key pairs $(pk_d, sk_d)$ and $(pk_c, sk_c)$. First, the Leader and the Followers generate random vectors $R_L$ and $\{R_j, j \in [1, l]\}$, respectively. These vectors are encrypted with the DOs' public key $pk_d$ and broadcast to each other. Each node then sums the vectors it receives using Paillier's additive homomorphic property to obtain $[[R_{sum}]]_{pk_d}$. Finally, consistency is confirmed via the PBFT protocol, and the Leader records the result in a new block. The encryption and decryption of gradients and random vectors are briefly described as follows.

KeyGen. Given a security parameter $\tau$, select two large primes $p, q$ of $\tau$ bits and calculate $N = pq$ and $\lambda = \mathrm{lcm}(p-1, q-1)$. Define the function $L(x) = \frac{x-1}{N}$ and randomly select a generator $g$ of $\mathbb{Z}_{N^2}^{*}$ such that $\gcd(L(g^{\lambda} \bmod N^2), N) = 1$. Then the public key is $(g, N)$ and the private key is $\lambda$. The plaintext space and ciphertext space of the cryptosystem are $\mathbb{Z}_N$ and $\mathbb{Z}_{N^2}^{*}$, respectively. In this paper, the encryption and decryption algorithms are denoted as $E$ and $D$, respectively.

Encrypt. For a plaintext $m \in \mathbb{Z}_N$, select a random number $r \in \mathbb{Z}_N^{*}$ and calculate the ciphertext as (1) $E(m) = g^{m} r^{N} \bmod N^2$.

Decrypt. The decryption of a ciphertext $c \in \mathbb{Z}_{N^2}^{*}$ is as follows: (2) $m = \dfrac{L(c^{\lambda} \bmod N^2)}{L(g^{\lambda} \bmod N^2)} \bmod N$.

Evaluate. The cryptosystem is additively homomorphic: (3) $E(m_1) \cdot E(m_2) = g^{m_1} r_1^{N} \cdot g^{m_2} r_2^{N} \bmod N^2 = g^{m_1 + m_2} (r_1 r_2)^{N} \bmod N^2 = E(m_1 + m_2)$. The cryptosystem also satisfies the multiplicative property (4) $E(m_1)^{m_2} = E(m_1 \cdot m_2)$. A toy numeric sketch of these properties is given below.
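
The following is a toy Paillier sketch of Equations (1)-(4). It uses deliberately small primes and the common choice $g = N + 1$ for readability only; it is not a secure parameterisation and the helper names are hypothetical, not the paper's code.

```python
# Toy Paillier cryptosystem illustrating Eqs. (1)-(4); insecure parameters, demo only.
import math
import random

p, q = 293, 433                       # small primes (illustrative only)
N, N2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)          # lambda = lcm(p - 1, q - 1)
g = N + 1                             # a standard valid generator choice

def L(x):                             # L(x) = (x - 1) / N
    return (x - 1) // N

# mu is the modular inverse of L(g^lambda mod N^2); multiplying by mu is the
# same as the division in Eq. (2).
mu = pow(L(pow(g, lam, N2)), -1, N)

def encrypt(m):
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(g, m, N2) * pow(r, N, N2)) % N2      # Eq. (1): E(m) = g^m r^N mod N^2

def decrypt(c):
    return (L(pow(c, lam, N2)) * mu) % N             # Eq. (2)

m1, m2 = 1234, 5678
c1, c2 = encrypt(m1), encrypt(m2)
assert decrypt((c1 * c2) % N2) == (m1 + m2) % N      # Eq. (3): additive homomorphism
assert decrypt(pow(c1, m2, N2)) == (m1 * m2) % N     # Eq. (4): multiplicative property
print("homomorphic properties verified")
```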

5.2. Model secure consensus

The DOs download the random vector $[[R_{sum}]]_{pk_d}$ and decrypt it with their private key $sk_d$. Here, the Leader uses the additive homomorphism of the Paillier cryptosystem to sum the encrypted random vectors uploaded by the different nodes and obtain the global random vector $[[R_{sum}]]_{pk_d}$. Combining Equation (2), the plaintext form $R_{sum}$ of the blinding factor is obtained as (5) $R_{sum} = \dfrac{L\left(\left(\prod_{i=1}^{n} [[R_i]]\right)^{\lambda} \bmod N^2\right)}{L(g^{\lambda} \bmod N^2)} \bmod N$, and the model is updated as (6) $w^{(l)} = w^{(l-1)} - \eta G_k$. The sub-gradients trained by the local nodes are blinded with $R_{sum}$ and uploaded by the different DOs to the Leader node for consensus, and malicious gradients are eliminated through the interactions between the Leader and the Followers. The blinded sub-gradients $\{G_k + R_{sum}, k \in [1, n]\}$ are uploaded to the Leader. We then eliminate the global blinding factor $R_{sum}$ to obtain the gradient $G_k$ and update the model using the parameters $w^{(l-1)}$ and the learning rate $\eta$ from the previous iteration. A numeric sketch of this blinding-and-unblinding flow is given below.
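
The numeric sketch below illustrates, in plaintext form, the assumed blinding flow: each DO adds the shared $R_{sum}$ to its sub-gradient, the consensus averages the blinded gradients, and removing $R_{sum}$ afterwards recovers the plain average. Vector sizes and values are illustrative only; the actual scheme keeps $R_{sum}$ encrypted until the DOs decrypt it.

```python
# Plaintext view of blinding with R_sum: averaging the blinded sub-gradients and
# then subtracting R_sum yields the same result as averaging the raw gradients.
import numpy as np

rng = np.random.default_rng(1)
n, dim = 4, 5
G = rng.normal(size=(n, dim))                           # local sub-gradients G_k
R_sum = rng.integers(1, 100, size=dim).astype(float)    # shared blinding factor

blinded = G + R_sum                                     # uploaded values: G_k + R_sum
avg_blinded = blinded.mean(axis=0)                      # consensus average: G_avg + R_sum
G_avg = avg_blinded - R_sum                             # remove the blinding factor

assert np.allclose(G_avg, G.mean(axis=0))               # recovers the plain average gradient
```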

The consensus process is shown in Section 3.2, and we design the SecPBFT algorithm within this process to further eliminate malicious gradients. The details of SecPBFT are shown in Algorithm 1.

We first mark all sub-gradients as unvisited. A sub-gradient is randomly selected, marked as visited, and added to the cluster $C_1$. To obtain the temporary cluster set $W$, the gradients adjacent to $G_k + R_{sum}$ within radius $\varepsilon$ are searched. Then the potentials of all clusters are computed iteratively, and the cluster with the largest potential is determined to be the benign gradient cluster set. All sub-gradients in the benign cluster set are aggregated to obtain the average gradient $G_{avg}^{(l)} + R_{sum} \leftarrow \mathrm{FedAvg}(C_{max})$, which is encrypted with the public key $pk_d$. Finally, $G_{avg}^{(l)}$ is decrypted, the random vectors are removed, and the global model is updated.

The parameter selection for DBSCAN follows the latest related works (Boonchoo et al., Citation2019; Chen et al., Citation2022). Note that, since this paper assumes that no more than 50% of the nodes are malicious, we set $MinPts = 0.5n$. A sketch of the resulting gradient-screening step is shown below.
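
The sketch below captures the screening idea of SecPBFT (Algorithm 1) under simplifying assumptions: it clusters the blinded sub-gradients with DBSCAN using $MinPts = 0.5n$, keeps the largest cluster as a stand-in for the "largest potential" cluster used in the paper, and applies FedAvg to it. The value of `eps` and the function name are placeholders.

```python
# Simplified SecPBFT-style screening: DBSCAN on blinded gradients, keep the
# largest cluster (stand-in for the largest-potential cluster), then FedAvg it.
import numpy as np
from sklearn.cluster import DBSCAN

def screen_and_average(blinded_grads, eps=1.0):
    n = len(blinded_grads)
    X = np.stack(blinded_grads)
    labels = DBSCAN(eps=eps, min_samples=max(1, n // 2)).fit_predict(X)  # MinPts = 0.5n
    valid = labels[labels != -1]
    if valid.size == 0:
        return X.mean(axis=0)                       # fallback: no cluster was found
    benign_label = np.bincount(valid).argmax()      # largest cluster as the benign set
    return X[labels == benign_label].mean(axis=0)   # FedAvg over the benign cluster

# Example: 5 benign gradients near zero plus one obviously poisoned outlier.
rng = np.random.default_rng(0)
grads = [rng.normal(0, 0.05, 10) for _ in range(5)] + [np.full(10, 5.0)]
print(screen_and_average(grads)[:3])                # outlier is excluded from the average
```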

5.3. Local model convergence

To completely eliminate the influence of late poisoning attacks on the final model, each node obtains the final model by training on its local data after a certain number of global aggregation epochs. However, the "Data Island" issue limits the performance of a model trained only on the local raw data. Therefore, we introduce the ACGAN (Odena et al., Citation2017) and improve its loss function to further generalise the generated local data, break through the limitation of the "Data Island", and train a final model with high performance.

To generate local data, a node first builds a generator and a discriminator. The Label and the Noise are concatenated and sent to the generator, and the Label and the local real data are concatenated and sent to the discriminator, where the Label is the label information of the training data and the Noise is a random vector with a fixed distribution. The output of the generator is the newly generated image. The output of the discriminator has two parts: a judgment of the authenticity of the newly generated image, and a classification result for that image.

The discriminator aims not only to classify correctly but also to distinguish real data from fake data correctly. The generator also aims for correct classification, but wants the discriminator to be unable to recognise its fake data as fake.

Thus, the loss functions of G and D are (7) $L_G = L_L - L_S$ and (8) $L_D = L_L + L_S$, where the true-false judgment loss $L_S$ and the classification loss $L_L$ are calculated as (9) $L_S = E[\log P(S = \mathrm{real} \mid x_{real})] + E[\log P(S = \mathrm{fake} \mid x_{fake})]$ and (10) $L_L = E[\log P(L = \mathrm{lab} \mid x_{real})] + E[\log P(L = \mathrm{lab} \mid x_{fake})]$.

To improve the generalisation performance of the final model, we maximise the generator loss function and minimise the discriminator loss function to ensure that the newly generated data fit the desired distribution of the original data as well as possible. We further introduce the loss function of the node task model as a loss bias of the discriminator, which minimises the loss function $L_D$: (11) $L_D = L_L + L_S + L_{node}$. Finally, the new data x generated by the generator are used to update the final node task model on the basis of $w^{(l)}$. The details of model convergence are shown in Algorithm 2, and a sketch of the modified discriminator loss follows.
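
The PyTorch sketch below computes the modified discriminator loss of Equation (11), $L_D = L_L + L_S + L_{node}$. The stand-in discriminator class, its layer sizes, and the `node_loss` argument are assumptions for illustration; the discriminator is assumed to output a sigmoid authenticity score together with class logits, as in an ACGAN-style architecture.

```python
# Sketch of Eq. (11): ACGAN-style discriminator loss plus the node task loss.
import torch
import torch.nn.functional as F

class TinyACDiscriminator(torch.nn.Module):
    """Stand-in discriminator returning (sigmoid real/fake score, class logits)."""
    def __init__(self, dim_x=64, n_classes=10):
        super().__init__()
        self.body = torch.nn.Sequential(torch.nn.Linear(dim_x, 64), torch.nn.ReLU())
        self.adv = torch.nn.Linear(64, 1)
        self.cls = torch.nn.Linear(64, n_classes)

    def forward(self, x):
        h = self.body(x)
        return torch.sigmoid(self.adv(h)), self.cls(h)

def discriminator_loss(disc, x_real, y_real, x_fake, y_fake, node_loss):
    s_real, c_real = disc(x_real)                 # authenticity score and class logits
    s_fake, c_fake = disc(x_fake.detach())
    L_S = F.binary_cross_entropy(s_real, torch.ones_like(s_real)) \
        + F.binary_cross_entropy(s_fake, torch.zeros_like(s_fake))    # Eq. (9) style
    L_L = F.cross_entropy(c_real, y_real) + F.cross_entropy(c_fake, y_fake)  # Eq. (10) style
    return L_L + L_S + node_loss                  # Eq. (11): bias L_D with the node task loss
```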

6. Theoretical analysis

6.1. Security

We rigorously prove security by a simulation argument. The principle is to compare the security of the actual protocol with that of an ideal secure multi-party computation protocol: if the actual protocol leaks no more information than the ideal protocol, the actual protocol is proven secure.

Theorem 6.1

Algorithm SecPBFT is secure and can resist gradient leakage attacks.

Proof.

A simulator $Sim_{Leader}$ is built to generate computationally indistinguishable views of the gradients. During actual execution, the view of the Leader is $View_{Leader} = \{\ldots, R, G_k + R_{sum}\ (k \in [1, n])\}$. The simulator $Sim_{Leader}$ randomly selects the gradient vectors $\{G_k' + R_{sum}', k \in [1, n]\}$ and performs the subsequent process based on them; that is, the information sequence generated by the Leader during the simulation is $Sim_{Leader}(\ldots, \lambda) = \{\ldots, R', G_k' + R_{sum}'\ (k \in [1, n])\}$.

In $View_{Leader}$ and $Sim_{Leader}$, $\{G_k + R_{sum}\ (k \in [1, n])\}$ and $\{G_k' + R_{sum}'\ (k \in [1, n])\}$ are computationally indistinguishable owing to the randomness of the random vectors $R$ and $R'$, and of $R_{sum}$ and $R_{sum}'$. That is, $View_{Leader} \overset{c}{\equiv} Sim_{Leader}$.

The security analysis of Followers is similar to Leader's.

6.2. Correctness

Theorem 6.2

Algorithm SecPBFT can correctly calculate the global model w(l).

Proof.

In the $l$-th epoch of the global model, each $Client_i, i \in [1, n]$ can obtain the gradient $G_{avg}^{(l)} + R_{sum}$ encrypted with $pk_d$. According to the additive homomorphism of the Paillier cryptosystem, the decrypted global average gradient $G_{avg}^{(l)}$ can be obtained via Algorithm 1, i.e. $G_{avg}^{(l)} = (G_{avg}^{(l)} + R_{sum}) - R_{sum}$. Therefore, the aggregated global model $w^{(l)}$ of the $l$-th epoch can be obtained as $w^{(l)} = w^{(l-1)} - \eta G_{avg}^{(l)}$.

6.3. Efficiency analysis

This paper assumes that the gradient vector has $\gamma$ dimensions. Since local training occurs in plaintext at the nodes, the computational overhead arises only during the consensus of the SecPBFT algorithm. The Leader and the Followers encrypt the final vector $G_{avg}^{(l)} + R_{sum}$, and each needs to perform $\gamma$ encryption operations. Each Paillier encryption or decryption requires two modular exponentiation operations. Therefore, FedG2L needs $(n+1)\gamma$ modular exponentiation operations, and its computational complexity is $O(n\gamma)$.

Furthermore, the communication complexity is only generated by the SecPBFT. Therefore, the communication complexity of the aggregation process is O(n2).

7. Performance evaluation

In this section, we first introduce the experimental setting, including the benchmark model, the datasets, and the experimental environment. Then, we analyse the effectiveness of FedG2L and compare it with existing works.

7.1. Experiment setting

7.1.1. Benchmarks model.

In this scheme, the node task is image classification. For ease of analysis, we choose a traditional convolutional neural network as the benchmark model. In a real deployment, different benchmark models can be selected depending on the specific node task.

7.1.2. Datasets.

In this article, the MNIST and EMNIST datasets are used to evaluate the performance of FedG2L under poisoning attacks and gradient leakage attacks. The MNIST dataset, released by the National Institute of Standards and Technology (NIST), contains black-and-white images of the digits 0 to 9 and is a benchmark dataset in machine learning (LeCun et al., Citation1998). EMNIST is an extension of MNIST that contains handwritten letters, digits, and other symbols (Cohen et al., Citation2017).

7.1.3. Experimental environment.

The experimental device is a personal desktop PC with an Intel Xeon E5-2660 CPU, 256 GB of 1333 MHz memory, a GTX 2080 GPU, and a 500 GB SSD. The operating system is Windows 11. The code is written in Python 3.7 with the PyTorch library.

7.2. Effectiveness analysis

To verify the effectiveness of FedG2L, we compare it with existing schemes, as shown in Figure . As the poisoning ratio increases, the accuracy of all defense measures decreases, because the proportion of valuable training data decreases. Our scheme is slightly lower than some existing schemes in the global aggregation stage because, unlike the existing schemes, it has no benign benchmark gradient as a reference, and the screening and elimination of malicious gradients are realised by the unsupervised SecPBFT algorithm.

As the number of epochs increases, the model accuracy of the existing schemes stabilises, whereas the accuracy of our scheme still improves by 3% after epoch > 70, as shown in Figure (a). This is because the global gradient is used to update the model in the early stage of training, which ensures that local nodes break through the "Data Island". In addition, even with 50% malicious nodes, the accuracy of our scheme is at least 11% higher than that of the existing schemes, because FedG2L uses locally generated data in the late training stage to fundamentally eliminate the influence of malicious nodes' poisoning behaviour during aggregation on the final model.

Figure 6. Comparison of accuracy with different epochs and byzantine percentage. (a) Epochs. (b) Byzantine percentage.

Figure 6. Comparison of accuracy with different epochs and byzantine percentage. (a) Epochs. (b) Byzantine percentage.

Figure 7. The ASR with different Byzantine percentages on 2 datasets. (a) Performance on MNIST dataset. (b) Performance on EMNIST dataset.

Figure 7. The ASR with different Byzantine percentages on 2 datasets. (a) Performance on MNIST dataset. (b) Performance on EMNIST dataset.

To verify the defense capability of FedG2L, we further analyse the attack success rate (ASR) under different Byzantine percentages on the two datasets. As the Byzantine percentage increases, the ASR rises significantly, as shown in Figure . When the Byzantine node percentage is higher than 10%, the ASR without any defense on the two datasets is almost 100% (98% and 90%, respectively). In FedBC, the ASR rises appreciably only when the percentage of Byzantine nodes exceeds 30%, which shows that the SecPBFT algorithm designed in the proposed scheme can effectively eliminate malicious gradients and iteratively select the optimal benign gradients for aggregation. However, the ASR of FedBC exceeds 50% when the percentage of Byzantine nodes reaches 50%. Since FedG2L trains on locally generated data in the later stage of model training, it keeps the ASR below 20% even when the Byzantine percentage reaches 50%.

7.3. Ablation studies

We compare the accuracy of our conference-version scheme (FedBC) and the proposed scheme FedG2L over the later training epochs, as shown in Figure . The model performance of the baseline scheme without defense decreases significantly after 70 epochs: since the model tends to converge at this point, the gradient changes are small and the malicious gradients cannot be eliminated by simple aggregation. Even FedBC still shows a 14% decrease after epoch = 70. Because the scheme proposed in this paper trains the model with locally generated data after epoch = 70, it is not affected by malicious gradients, which ensures that the accuracy of the node's final model does not degrade.

Figure 8. Comparison of accuracy with different schemes.

Figure 8. Comparison of accuracy with different schemes.

To verify the improvement in the final global model performance brought by the GAN-generated data, we test on the MNIST and CIFAR-10 datasets, as shown in Figure . As the amount of generated data increases, the performance of the final model gradually increases, but beyond a certain threshold the performance oscillates or even decreases, because the model overfits once the generated data exceed that threshold. These experiments are carried out on IID data, and the CIFAR-10 dataset has only 6000 training images per class, so each local node has only 600 training images. Even if more data are generated, they cannot describe the complete data distribution, so the performance of the model is limited.

Figure 9. Comparison of accuracy with different numbers of generated data on different datasets.

Figure 9. Comparison of accuracy with different numbers of generated data on different datasets.

8. Conclusion

In this paper, we propose a "G2L"-based privacy-preserving federated learning scheme against poisoning attacks. First, the malicious gradients uploaded by malicious nodes are eliminated in the global aggregation process without a benign benchmark gradient. Furthermore, we propose a local convergence algorithm to further eliminate the effect of late poisoning attacks. Theoretical analysis demonstrates the security and correctness of the scheme, and the experimental results show that FedG2L improves the model accuracy by at least 55% compared with the no-defense scheme and reduces the attack success rate by more than 60%. The proposed scheme can be widely applied in distributed collaborative computing scenarios such as medical care, finance, and the industrial Internet of things, breaking through the limitation of the "Data Island" while protecting data privacy in different scenarios.

For future work, we will consider gradient privacy protection and poisoning-attack defense on non-IID datasets. For privacy preservation, we expect to generate large amounts of local data for different distributions. For defending against poisoning attacks, we expect to design a poisoning-attack elimination method that does not rely on similarity measures.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the National Natural Science Foundation of China (62125205), the Natural Science Basic Research Program of Shaanxi Province (2022JQ-594).

References

  • Achituve, I., Shamsian, A., Navon, A., Chechik, G., & Fetaya, E. (2021). Personalized federated learning with gaussian processes. In Annual Conference on Neural Information Processing Systems (NeurIPS), 2021, (pp. 8392–8406). MIT Press.
  • Agrawal, S., Sarkar, S., Alazab, M., Maddikunta, P. K. R., Gadekallu, T. R., & Pham, Q. V. (2021). Genetic cfl: hyperparameter optimization in clustered federated learning. Computational Intelligence and Neuroscience, (2021). https://doi.org/10.1155/2021/7156420
  • Awan, S., Luo, B., & Li, F. (2021). Contra: defending against poisoning attacks in federated learning. In European symposium on research in computer security (pp. 455–475). Springer.
  • Boonchoo, T., Ao, X., Liu, Y., Zhao, W., Zhuang, F., & He, Q. (2019). Grid-based dbscan: indexing and inference. Pattern Recognition, 90(2019), 271–284. https://doi.org/10.1016/j.patcog.2019.01.034
  • Büyüközkan, G., & Tüfekçi, G. (2021). A decision-making framework for evaluating appropriate business blockchain platforms using multiple preference formats and vikor. Information Sciences, 571(2021), 337–357. https://doi.org/10.1016/j.ins.2021.04.044
  • Cai, J., Li, C., Tao, X., & Tai, Y.-W. (2022). Image multi-inpainting via progressive generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 978–987). IEEE.
  • Chen, H., Liang, M., Liu, W., Wang, W., & Liu, P. X. (2022). An approach to boundary detection for 3d point clouds based on dbscan clustering. Pattern Recognition, 124(2022), 108431. https://doi.org/10.1016/j.patcog.2021.108431
  • Chen, J., Zhang, X., Zhang, R., Wang, C., & Liu, L. (2021a). De-pois: an attack-agnostic defense against data poisoning attacks. IEEE Transactions on Information Forensics and Security, 16(2021), 3412–3425. https://doi.org/10.1109/TIFS.2021.3080522
  • Chen, J., Zhang, X., Zhang, R., Wang, C., & Liu, L. (2021b). De-pois: an attack-agnostic defense against data poisoning attacks. IEEE Transactions on Information Forensics and Security, 16(2021), 3412–3425. https://doi.org/10.1109/TIFS.2021.3080522
  • Cohen, G., Afshar, S., Tapson, J., & Van Schaik, A. (2017). Emnist: extending mnist to handwritten letters. In 2017 international joint conference on neural networks (IJCNN) (pp. 2921–2926). IEEE.
  • de Sousa Pacheco, L., Rosário, D., Cerqueira, E., & Braun, T. (2021). Federated user clustering for non-iid federated learning. Electronic Communications of the EASST, 80(2021). https://doi.org/10.14279/tuj.eceasst.80.1130
  • Ester, M., Kriegel, H.-P., Sander, J., & Xu, X. (1996). Density-based spatial clustering of applications with noise. In International conference on knowledge discovery and data mining (Vol. 240). https://doi.org/10.1109/ICSMC.2006.384571
  • Feng, C., Liu, B., Yu, K., Goudos, S. K., & Wan, S. (2021a). Blockchain-empowered decentralised horizontal federated learning for 5g-enabled uavs. IEEE Transactions on Industrial Informatics, 18(5), 3582–3592. https://doi.org/10.1109/TII.2021.3116132
  • Feng, Y., Zhang, W., Luo, X., & Zhang, B. (2021b). A consortium blockchain-based access control framework with dynamic orderer node selection for 5g-enabled industrial iot. IEEE Transactions on Industrial Informatics, 18(4), 2840–2848. https://doi.org/10.1109/TII.2021.3078183
  • Ge, N., Li, G., Zhang, L., & Liu, Y. (2022). Failure prediction in production line based on federated learning: an empirical study. Journal of Intelligent Manufacturing, 33(8), 2277–2294. https://doi.org/10.1007/s10845-021-01775-2
  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2020). Generative adversarial networks. Communications of the ACM, 63(11), 139–144. https://doi.org/10.1145/3422622
  • Hou, D., Zhang, J., Man, K. L., Ma, J., & Peng, Z. (2021). A systematic literature review of blockchain-based federated learning: architectures, applications and issues. In 2021 2nd Information communication technologies conference (ICTC) (pp. 302–307). IEEE.
  • Jagielski, M., Oprea, A., Biggio, B., Liu, C., Nita-Rotaru, C., & Li, B. (2018). Manipulating machine learning: poisoning attacks and countermeasures for regression learning. In 2018 IEEE symposium on security and privacy (SP) (pp. 19–35). IEEE.
  • Jia, B., Zhang, X., Liu, J., Zhang, Y., Huang, K., & Liang, Y. (2021). Blockchain-enabled federated learning data protection aggregation scheme with differential privacy and homomorphic encryption in iiot. IEEE Transactions on Industrial Informatics, 18(6), 4049–4058. https://doi.org/10.1109/TII.2021.3085960
  • Kim, J., Choi, Y., & Uh, Y. (2022). Feature statistics mixing regularization for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11294–11303). IEEE.
  • LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324. https://doi.org/10.1109/5.726791
  • Levine, A., & Feizi, S. (2020). Deep partition aggregation: Provable defense against general poisoning attacks. arXiv preprint arXiv:2006.14768.
  • Li, D., Han, D., Weng, T.-H., Zheng, Z., Li, H., Liu, H., Castiglione, A., & Li, K.-C. (2022). Blockchain for federated learning toward secure distributed machine learning systems: a systemic survey. Soft Computing, 26(9), 4423–4440. https://doi.org/10.1007/s00500-021-06496-5
  • Li, X., Qu, Z., Zhao, S., Tang, B., Lu, Z., & Liu, Y. (2023). Lomar: a local defense against poisoning attack on federated learning. IEEE Transactions on Dependable and Secure Computing, 20(1), 437–450. https://doi.org/10.1109/TDSC.2021.3135422
  • Liu, X., Li, H., Xu, G., Chen, Z., Huang, X., & Lu, R. (2021). Privacy-enhanced federated learning against poisoning adversaries. IEEE Transactions on Information Forensics and Security, 16(2021), 4574–4588. https://doi.org/10.1109/TIFS.2021.3108434
  • Ma, Z., Ma, J., Miao, Y., Li, Y., & Deng, R. H. (2022). Shieldfl: mitigating model poisoning attacks in privacy-preserving federated learning. IEEE Transactions on Information Forensics and Security, 17(2022), 1639–1654. https://doi.org/10.1109/TIFS.2022.3169918
  • Miao, Y., Liu, Z., Li, H., Choo, K.-K. R., & Deng, R. H. (2022). Privacy-preserving byzantine-robust federated learning via blockchain systems. IEEE Transactions on Information Forensics and Security, 17(2022), 2848–2861. https://doi.org/10.1109/TIFS.2022.3196274
  • Odena, A., Olah, C., & Shlens, J. (2017). Conditional image synthesis with auxiliary classifier gans. In International conference on machine learning (pp. 2642–2651). PMLR.
  • Paillier, P. (1999). Public-key cryptosystems based on composite degree residuosity classes. In International conference on the theory and applications of cryptographic techniques (pp. 223–238). Springer.
  • Schneider, T., Suresh, A., & Yalame, H. (2023). Comments on “privacy-enhanced federated learning against poisoning adversaries”. IEEE Transactions on Information Forensics and Security, 2023(18), 1407–1409. https://doi.org/10.1109/TIFS.2023.3238544
  • Shejwalkar, V., & Houmansadr, A. (2021). Manipulating the byzantine: optimizing model poisoning attacks and defenses for federated learning. In NDSS. ISOC.
  • Sun, J., Li, A., DiValentin, L., Hassanzadeh, A., Chen, Y., & Li, H. (2021). Fl-wbc: enhancing robustness against model poisoning attacks in federated learning from a client perspective. In Advances in Neural Information Processing Systems (Vol. 34, pp. 12613–12624). MIT Press.
  • Wang, J., Guo, S., Xie, X., & Qi, H. (2022a). Protect privacy from gradient leakage attack in federated learning. In IEEE INFOCOM 2022-IEEE conference on computer communications (pp. 580–589). IEEE.
  • Wang, R., Wang, X., Chen, H., Picek, S., Liu, Z., & Liang, K. (2022b). Brief but powerful: Byzantine-robust and privacy-preserving federated learning via model segmentation and secure clustering. arXiv preprint arXiv:2208.10161.
  • Weerasinghe, S., Alpcan, T., Erfani, S. M., & Leckie, C. (2021). Defending support vector machines against data poisoning attacks. IEEE Transactions on Information Forensics and Security, 16(2021), 2566–2578. https://doi.org/10.1109/TIFS.10206
  • Wei, W., & Liu, L. (2021). Gradient leakage attack resilient deep learning. IEEE Transactions on Information Forensics and Security, 17(2021), 303–316. https://doi.org/10.1109/TIFS.2021.3139777
  • Wen, J., Zhao, B. Z. H., Xue, M., Oprea, A., & Qian, H. (2021a). With great dispersion comes greater resilience: efficient poisoning attacks and defenses for linear regression models. IEEE Transactions on Information Forensics and Security, 16(2021), 3709–3723. https://doi.org/10.1109/TIFS.2021.3087332
  • Wen, J., Zhao, B. Z. H., Xue, M., Oprea, A., & Qian, H. (2021b). With great dispersion comes greater resilience: efficient poisoning attacks and defenses for linear regression models. IEEE Transactions on Information Forensics and Security, 16(2021), 3709–3723. https://doi.org/10.1109/TIFS.2021.3087332
  • Xiao, X., Tang, Z., Li, C., Xiao, B., & Li, K. (2023). SCA: sybil-based collusion attacks of iiot data poisoning in federated learning. IEEE Transactions on Industrial Informatics, 19(3), 2608–2618. https://doi.org/10.1109/TII.2022.3172310
  • Xu, M., & Li, X. (2022). Fedbc: an efficient and privacy-preserving federated consensus scheme. In International symposium on security and privacy in social networks and big data (pp. 148–162). Springer.
  • Yang, Z., Shi, Y., Zhou, Y., Wang, Z., & Yang, K. (2023). Trustworthy federated learning via blockchain. IEEE Internet of Things Journal, 10(1), 92–109. https://doi.org/10.1109/JIOT.2022.3201117