
Papers on Edge Intelligence (Edge Intelligence & Federated Learning)

  • Fast Inference
  • Model Compression
  • Reduction in Communication
  • Federated Learning and Optimization
  • Federated Learning and Security

  Edge computing has produced many new research results recently. In the context of AI, new work on Edge Intelligence differs from traditional deep learning research in that it cares not only about model accuracy but, more importantly, about practical deployability: responsiveness, efficiency, energy consumption, resource footprint, and so on.

  Beyond that, federated learning is a major aspect of edge intelligence, along with the related research on its security. Below is a selection of papers from top venues in recent years, plus a few highly cited ones.

Fast Inference

  1. BranchyNet: Fast inference via early exiting from deep neural networks, ICPR, 2016. During inference, the confidence of each intermediate prediction is estimated (via the entropy of the softmax output); once an early side branch crosses the confidence threshold, the network exits there with that result instead of running to the end.
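The exit rule above can be sketched in a few lines. This is a toy illustration with made-up logits and a hypothetical threshold, not the paper's implementation (which jointly trains the side branches and tunes a per-branch entropy threshold):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def early_exit_predict(branch_logits, threshold=0.5):
    """Run the exit branches in order; stop at the first branch whose
    softmax entropy (prediction uncertainty) falls below `threshold`.
    The final branch always answers. Returns (class, exit_index)."""
    last = len(branch_logits) - 1
    for i, logits in enumerate(branch_logits):
        probs = softmax(logits)
        if entropy(probs) < threshold or i == last:
            return probs.index(max(probs)), i
```

A confident early branch exits immediately; an uncertain one defers to the deeper (and more expensive) branches.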
  2. Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge, ASPLOS, 2017. Combines DNN inference with edge computing through collaborative cloud-edge-device inference: at deployment time the neural network is partitioned across cloud, edge, and device so that inference meets energy and latency requirements.
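The partition search reduces to minimizing estimated device compute plus transfer plus server compute over candidate split points. A minimal sketch, with hypothetical per-layer latency profiles standing in for Neurosurgeon's regression-based predictors:

```python
def best_partition(device_ms, edge_ms, transfer_kb, bandwidth_kbps):
    """Pick the split point k: layers [0..k) run on-device, layers [k..n)
    on the edge server, and transfer_kb[k] kilobits cross the network.
    Minimizes total latency: device compute + transfer + edge compute."""
    n = len(device_ms)

    def total_ms(k):
        transfer_ms = transfer_kb[k] / bandwidth_kbps * 1000.0
        return sum(device_ms[:k]) + transfer_ms + sum(edge_ms[k:])

    return min(range(n + 1), key=total_ms)
```

With a fast link the optimum drifts toward k = 0 (offload everything); with a slow link it drifts toward k = n (run fully on-device), which is exactly the trade-off the paper exploits.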
  3. Edge Intelligence: On-Demand Deep Learning Model Co-Inference with Device-Edge Synergy, SIGCOMM, 2018. Builds on the previous two papers (BranchyNet and Neurosurgeon), using linear regression models to automate both the network partitioning and the BranchyNet-style early exits, maximizing accuracy subject to a latency requirement.
  4. Adaptive Neural Networks for Efficient Inference, ICML, 2017. Extends the BranchyNet idea: whereas BranchyNet exits early within a single network (e.g., AlexNet) by skipping later layers, this paper chains several networks in an automaton-like cascade, so that an early exit skips the later, more complex networks entirely. For image classification, an input first enters AlexNet (combined with BranchyNet-style exits); if the accuracy target is met it exits, otherwise it proceeds to the GoogLeNet and ResNet stages.
  5. Opportunistic Learning: Budgeted Cost-Sensitive Learning from Data Streams, ICLR, 2019. In some machine learning and edge intelligence settings, such as video-stream processing, acquiring features has a cost; the paper uses a reinforcement-learning value function to decide which features to acquire. It also specifically argues that estimating confidence from entropy, as BranchyNet does, is inaccurate, and proposes a Monte Carlo estimate instead.

Model Compression

  1. Efficient On-Device Models using Neural Projections, ICML, 2019. Proposes a neural-projection method, reminiscent of GANs and knowledge distillation, to compress the model structure and obtain a much smaller model, with variants tailored to CNNs and RNNs.

  2. Deep Networks with Stochastic Depth, ECCV, 2016. Listed under model compression even though, unlike the others, it speeds up training rather than inference: the network is viewed as a sequence of blocks, and during training each block is randomly either skipped or executed, dropout-style, which accelerates training; at inference time the full network is used.
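The block-level skipping can be sketched as follows. This is a toy version on scalars with residual "blocks" as plain functions; the survival probabilities are assumptions (the paper uses a linearly decaying schedule over depth):

```python
import random

def stochastic_depth_forward(x, blocks, survival_probs, training=True, rng=random):
    """Forward pass over residual blocks y = x + f(x).
    Training: each block is skipped with probability 1 - p (dropout over
    whole blocks), shortening the effective network and speeding training.
    Inference: every block runs, its contribution scaled by p."""
    for f, p in zip(blocks, survival_probs):
        if training:
            if rng.random() < p:
                x = x + f(x)
        else:
            x = x + p * f(x)
    return x
```

The inference-time scaling by p mirrors standard dropout rescaling, so the expected output matches training.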
  3. Expanding the Reach of Federated Learning by Reducing Client Resource Requirements, TODO, 2018. Proposes Federated Dropout for training: each participant holds not the complete model of the standard federated learning setup but a dropped-out submodel.
  4. Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Inp, ICLR, 2019. The first to take a quantitative view of inference-time energy consumption: it analyzes that multiply-accumulate operations are cheapest when as many operands as possible are zero, and minimizes energy via a weighted sparse projection of the network weights plus a newly proposed input-mask method that effectively "trains" the input itself.

Reduction in Communication

  1. Federated Learning: Strategies for Improving Communication Efficiency, NIPS, 2016. First to propose structured updates and sketched updates for federated parameter aggregation, reducing communication volume.
  2. 3LC: Lightweight and Effective Traffic Compression for Distributed Machine Learning, SysML, 2019. Proposes a lossy gradient-compression encoding that carries quantization loss forward in an error-accumulation matrix, achieving up to 280x compression in the best case.
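The error-accumulation idea behind such lossy codecs can be sketched with a simple rounding quantizer standing in for 3LC's actual three-value encoding (the quantizer and step size here are illustrative assumptions):

```python
def quantize(values, step):
    # Stand-in lossy quantizer: snap each value to the nearest multiple of `step`.
    return [round(v / step) * step for v in values]

class ErrorFeedbackCompressor:
    """Error accumulation for lossy gradient compression: the residual
    lost to quantization is added back into the next round's gradient,
    so the error is delayed rather than discarded."""
    def __init__(self, size):
        self.residual = [0.0] * size

    def compress(self, grad, step=0.5):
        corrected = [g + r for g, r in zip(grad, self.residual)]
        sent = quantize(corrected, step)
        self.residual = [c - s for c, s in zip(corrected, sent)]
        return sent
```

Over multiple rounds the transmitted sum tracks the true gradient sum to within one quantization step, which is why aggressive compression still converges.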

Federated Learning and Optimization

  1. Privacy-Preserving Deep Learning, CCS, 2015. Proposes Distributed Selective SGD; widely cited by later federated learning papers.

  2. Communication-Efficient Learning of Deep Networks from Decentralized Data, AISTATS, 2017. One of the founding papers of federated learning; proposes the FedAvg parameter-aggregation method.
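FedAvg's aggregation step is a sample-count-weighted average of client weights. A minimal sketch on flat parameter lists (the real algorithm also handles client sampling and multiple local epochs):

```python
def fed_avg(client_weights, client_sizes):
    """One FedAvg aggregation round: average the clients' model weights,
    weighting each client by its number of local training samples."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

Weighting by sample count makes the global update equivalent, in expectation, to SGD over the pooled data when clients are IID.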
  3. FedMD: Heterogeneous Federated Learning via Model Distillation, NeurIPS Workshop, 2019. Applies knowledge distillation to the federated setting: participants exchange soft scores instead of gradients.
  4. Federated Learning with Non-IID Data, CoRR, 2018. Shows through experiments and derivation that the earth mover's distance of each participant's data distribution is linked to the degree of non-IID-ness and the resulting performance drop, and proposes sharing a small amount of data to reduce the non-IID-ness.
  5. Overcoming Forgetting in Federated Learning on Non-IID Data, NeurIPS Workshop, 2019. Borrows the EWC loss from continual learning, where it prevents forgetting, to mitigate the non-IID problem in federated learning.
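The EWC regularizer added to each client's local loss can be written down directly. A sketch using the usual diagonal-Fisher formulation of EWC (symbols follow the standard EWC notation, not necessarily this paper's exact variant):

```python
def ewc_penalty(weights, anchor_weights, fisher_diag, lam=1.0):
    """EWC regularizer (lam/2) * sum_i F_i * (w_i - w*_i)^2: parameters
    that were important previously (large diagonal Fisher value F_i) are
    pulled back toward the anchor weights w*, discouraging forgetting."""
    return 0.5 * lam * sum(
        f * (w - a) ** 2
        for w, a, f in zip(weights, anchor_weights, fisher_diag))
```

In the federated setting the anchor is the previously aggregated global model, so local updates on skewed data cannot drift too far from it.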
  6. Variational Federated Multi-Task Learning, CoRR, 2019. Explores multi-task learning under the federated framework using a variational Bayesian network approach; a thoroughly Bayesian treatment.
  7. Knowledge Extraction with No Observable Data, NeurIPS, 2019. Shows that information can be extracted from a neural network, and knowledge distillation performed, without any observable data, using a GAN-like method; the generated data representations closely resemble the data means leaked through privacy attacks on federated learning.
  8. Overcoming Catastrophic Forgetting for Continual Learning via Model Adaptation, ICLR, 2019. Proposes per-sample model adaptation to improve accuracy, and replays generated data to prevent forgetting.

Federated Learning and Security

  1. Abnormal Client Behavior Detection in Federated Learning, NeurIPS Workshop, 2019. First to propose a detection-based approach (rather than a defense-based one) to anomalous gradients in federated learning, using an autoencoder.
  2. SignSGD with Majority Vote is Communication Efficient and Fault Tolerant, ICLR, 2019. Gradient updates are made by majority vote over 1-bit gradient signs, reducing communication and improving robustness; the paper also discusses the convergence of Adam.
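One round of the majority-vote scheme can be sketched as follows (a toy server step on flat parameter lists; the paper's full treatment also covers momentum variants and convergence analysis):

```python
def sign(x):
    # Returns -1, 0, or 1.
    return (x > 0) - (x < 0)

def signsgd_majority_step(weights, worker_grads, lr):
    """One server step: each worker contributes only sign(g) per coordinate
    (1 bit of communication), the server tallies a coordinate-wise majority
    vote, and the update is lr * sign(vote). A minority of faulty or
    adversarial workers is simply outvoted."""
    n = len(weights)
    votes = [sum(sign(g[i]) for g in worker_grads) for i in range(n)]
    return [w - lr * sign(v) for w, v in zip(weights, votes)]
```

Because only signs are exchanged, a worker sending arbitrarily large malicious values has no more influence than any other single vote, which is the source of the fault tolerance.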
  3. Exploiting Unintended Feature Leakage in Collaborative Learning, S&P, 2019. Shows that in collaborative learning, attributes unrelated to the main task leak: for example, an age classifier leaks gender, and a gender classifier leaks whether a person wears glasses.
  4. Property inference attacks on fully connected neural networks using permutation invariant representations, CCS, 2018. Property-inference attacks rarely targeted neural networks before, because the space of equivalent networks is too large to enumerate. Here, for each target property, pairs of shadow models are trained: one on data with the property (e.g., a host OS with the Meltdown vulnerability) and one without; the resulting trained models, labeled with the property, then serve as training data for a property-inference classifier. A DeepSets-style permutation-invariant representation collapses the n! equivalent neuron orderings of a network, making the attack tractable.
  5. How To Backdoor Federated Learning, CoRR, 2018. Proposes model poisoning attacks on federated learning, which are more effective than data poisoning; in the paper, model poisoning has no effective defense and bypasses many existing defense methods.
  6. Analyzing Federated Learning through an Adversarial Lens, ICML, 2019. Building on the first version of the previous paper, corrects its learning-rate choice and argues for a large learning rate; the second version of the previous paper counters that this large-learning-rate approach causes forgetting, so the attacker must keep attacking the federated model across rounds to maintain high backdoor-task accuracy.
  7. Can You Really Backdoor Federated Learning?, CoRR, 2019. Analyzes backdoor attacks on federated learning with many experimental case studies; finds that differential privacy and gradient-norm-based defenses are somewhat effective against backdoor attacks.
  8. Latent Backdoor Attacks on Deep Neural Networks, CCS, 2019. Proposes a latent backdoor attack: the attacked model does not even contain the backdoor task's label, as the backdoor is hidden in an intermediate representation, which makes it stealthier and undefendable by existing methods. Once the model undergoes transfer learning and the downstream model contains the backdoor task's label, the backdoor is activated.
  9. Deep leakage from gradients, NeurIPS, 2019. Leaks pixel-level images from gradients: given only a participant's gradient, the original training image can be recovered, so federated learning should still apply secure aggregation.
