Journal Article Details
Volume: 10
Privacy Preservation for Federated Learning With Robust Aggregation in Edge Computing
Article
Keywords: ARTIFICIAL-INTELLIGENCE
DOI: 10.1109/JIOT.2022.3229122
Source: SCIE
【 Abstract 】

Benefiting from the powerful data analysis and prediction capabilities of artificial intelligence (AI), data on the edge is often transferred to the cloud center for centralized training to obtain an accurate model. To resist the risk of privacy leakage caused by frequent data transmission between the edge and the cloud, federated learning (FL) is adopted in the edge paradigm: instead of transferring data directly, each edge server (ES) uploads its locally updated model to the central server for aggregation. However, an adversarial ES can infer the updates of other ESs from the aggregated model, and these updates may still expose characteristics of the other ESs' data. Moreover, adversarial ESs may disrupt the entire aggregation by uploading malicious updates. In this article, a privacy-preserving FL scheme with robust aggregation in edge computing, named FL-RAEC, is proposed. First, a hybrid privacy-preserving mechanism is constructed to preserve the integrity and privacy of the data uploaded by the ESs. For robust model aggregation, a phased aggregation strategy is proposed. In the first stage, autoencoder-based anomaly detection is performed and some ESs are selected for anonymous trust verification. In the next stage, the trust score of each ES is assessed over multiple rounds of random verification to identify malicious participants. Finally, FL-RAEC is evaluated in detail, showing that it achieves strong robustness and high accuracy under different attacks.
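The robust aggregation described in the abstract combines autoencoder-based anomaly detection on uploaded updates with per-ES trust scores accumulated over verification rounds. The following is a minimal server-side sketch of that idea, not the paper's actual algorithm or API: the autoencoder is assumed to be any callable that reconstructs a flattened update vector, and the names (detect_anomalies, TrustLedger, aggregate) and the threshold/decay constants are illustrative assumptions.

import numpy as np

def detect_anomalies(updates, autoencoder, threshold=2.0):
    """Flag updates whose autoencoder reconstruction error is an outlier.

    `autoencoder` is assumed to be a pre-trained callable that maps a
    flattened update vector to its reconstruction (hypothetical interface).
    """
    errors = np.array([np.linalg.norm(u - autoencoder(u)) for u in updates])
    # Standardize the errors and flag updates more than `threshold`
    # standard deviations above the mean reconstruction error.
    z = (errors - errors.mean()) / (errors.std() + 1e-12)
    return z > threshold  # boolean mask: True = suspected malicious update

class TrustLedger:
    """Tracks a per-ES trust score updated across verification rounds."""

    def __init__(self, num_servers, decay=0.9):
        self.scores = np.ones(num_servers)  # start fully trusted
        self.decay = decay

    def update(self, flagged):
        # Decay the trust of ESs flagged in this round and slowly restore
        # the trust of ESs that passed verification (capped at 1.0).
        self.scores = np.where(flagged,
                               self.scores * self.decay,
                               np.minimum(1.0, self.scores / self.decay))

    def weights(self, min_trust=0.5):
        # ESs whose trust fell below `min_trust` are excluded entirely;
        # the remaining scores are normalized into aggregation weights.
        w = np.where(self.scores >= min_trust, self.scores, 0.0)
        return w / (w.sum() + 1e-12)

def aggregate(updates, autoencoder, ledger):
    """One aggregation round: anomaly detection, trust update, weighted average."""
    updates = np.stack(updates)
    flagged = detect_anomalies(updates, autoencoder)
    ledger.update(flagged)
    w = ledger.weights()
    return (w[:, None] * updates).sum(axis=0)

Under these assumptions, the central server would call aggregate once per FL round with the flattened updates received from the ESs; the anonymous trust verification phase of FL-RAEC is abstracted here into the trust-score decay.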

【 License 】

Free   
