
Paper Reading: A Dynamic Multiobjective Evolutionary Algorithm Based on Decision Variable Classification (DMOEA-DVC)

A Dynamic Multiobjective Evolutionary Algorithm Based on Decision Variable Classification

  • This post is a set of study notes on the paper

    Z. Liang, T. Wu, X. Ma, et al., "A Dynamic Multiobjective Evolutionary Algorithm Based on Decision Variable Classification," IEEE Transactions on Cybernetics, 2020 (early access), pp. 1–14.

    The notes are for learning only, not for commercial use, and will be taken down on request. My academic background is limited, so if any of the reasoning is incorrect, criticism and corrections are welcome!

Abstract

  • Many dynamic multiobjective evolutionary algorithms (DMOEAs) combine diversity introduction or prediction approaches with conventional multiobjective evolutionary algorithms to solve dynamic multiobjective optimization problems (DMOPs). Balancing population diversity and convergence is crucial in these algorithms.
  • This paper proposes a dynamic multiobjective evolutionary algorithm based on decision variable classification, DMOEA-DVC.
  • DMOEA-DVC divides the decision variables into two groups in static optimization and three groups in change response, and handles each group separately in the corresponding stage. In static optimization, two different crossover operators are used for the two variable groups to accelerate convergence while maintaining diversity. In change response, DMOEA-DVC reinitializes the three variable groups with maintenance, prediction, and diversity introduction strategies, respectively.
  • DMOEA-DVC is compared with state-of-the-art DMOEAs on 33 benchmark DMOPs and achieves better overall performance.

Introduction

  • DMOPs are multiobjective optimization problems that change over time. DMOEAs must respond dynamically to environmental changes; mainstream algorithms can be divided into diversity introduction approaches [1], [19]-[24] and prediction approaches [25]-[33].

For diversity introduction approaches:

  • Advantage: once a change occurs, a certain proportion of randomized or mutated individuals is introduced into the evolving population to increase its diversity, which helps the algorithm adapt to the new environment.

  • Disadvantage: since these algorithms mainly rely on static evolutionary search to find the optimal solution set after diversity introduction, convergence may be slowed down.

For prediction approaches:

  • Advantage: better convergence in changing environments.
  • Disadvantage: performance is limited by the accuracy of the prediction model.

Existing problem

  • Existing methods do not account for the differences among decision variables and treat them all in the same way, which is inefficient for balancing population diversity and convergence.

The proposed DMOEA based on decision variable classification (DMOEA-DVC)

  • DMOEA-DVC combines diversity introduction, fast prediction models, and decision variable classification; diversity introduction and decision variable classification offset each other's inherent drawbacks.
  • A variable classification strategy is used in static optimization; in the change response stage, different evolution operators and response mechanisms are applied to the different variable groups.

Compared algorithms

  • DNSGA-II-B [1]
  • population prediction strategy (PPS) [25]
  • MOEA/D-KF [26]
  • steady state and generational evolutionary algorithm (SGEA) [33]
  • Tr-DMOEA [35]
  • DMOEA-CO [52]

Benchmarks

  • five FDA benchmarks [4]
  • three dMOP benchmarks [19]
  • two DIMP benchmarks [41]
  • nine JY benchmarks [42]
  • 14 newly developed DF benchmarks [43].

Contributions

  • Two decision variable classification methods.
  • In static optimization, different evolution operators for the two variable groups.
  • In change response, a hybrid response strategy of maintenance, prediction, and diversity introduction for the three variable groups.

BACKGROUND AND RELATED WORK

Basics of DMOP


Dynamic Pareto-optimal solutions and the dynamic Pareto-optimal set

  • Essentially the usual Pareto dominance with a time index t added.
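For reference, a minimal statement of the time-dependent dominance relation (the standard definition, written in my own notation rather than copied from the paper):

```latex
% x dominates y at time step t (minimization, m objectives)
\mathbf{x} \prec_t \mathbf{y} \;\Longleftrightarrow\;
\big(\forall\, i \in \{1,\dots,m\}:\ f_i(\mathbf{x},t) \le f_i(\mathbf{y},t)\big)
\;\wedge\;
\big(\exists\, j \in \{1,\dots,m\}:\ f_j(\mathbf{x},t) < f_j(\mathbf{y},t)\big)
```

PS(t) is the set of solutions that are nondominated under this relation at time t, and PF(t) is its image in objective space.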

Multi-optimal variables vs. single-optimal variables

  • Note the "there exists" vs. "for any" wording in the two definitions!
  • In other words, if a decision variable is single-optimal, it takes the same value for every solution in PS(t); if it is multi-optimal, it takes different values across PS(t).
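A compact restatement of the two cases, in my own notation but consistent with the sentence above (x_i and y_i denote the i-th components of two Pareto-optimal solutions):

```latex
x_i \text{ is single-optimal} \iff \forall\, \mathbf{x},\mathbf{y} \in PS(t):\; x_i = y_i
\qquad
x_i \text{ is multi-optimal} \iff \exists\, \mathbf{x},\mathbf{y} \in PS(t):\; x_i \neq y_i
```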

Types of DMOPs

(Figure from the paper summarizing the types of DMOPs omitted here.)

DMOEA

  • DMOEAs can be roughly divided into two categories: diversity introduction and prediction-based approaches.

Diversity introduction approaches

  • Diversity introduction injects random or mutated individuals when an environmental change occurs, to avoid the loss of population diversity.
  • Deb et al. [1] DNSGA-II-A/B: proposed two DMOEAs (DNSGA-II-A and DNSGA-II-B) based on NSGA-II [7]. Once a change is detected, DNSGA-II-A randomly reinitializes 20% of the individuals, while DNSGA-II-B randomly mutates 20% of the individuals.
  • Goh and Tan [19] dCOEA: introduced a competitive-cooperative coevolutionary algorithm (dCOEA) where some new individuals are generated randomly to enhance the diversity of the population when the environment changes.
  • Helbig and Engelbrecht [20] HDVEPSO: proposed a heterogeneous dynamic vector-evaluated particle swarm optimization (HDVEPSO) algorithm by combining heterogeneous particle swarm optimization (HPSO) [21], [22] and dynamic vector-evaluated particle swarm optimization (DVEPSO) [23]. HDVEPSO randomly reinitializes 30% of the swarm particles after the objective function changes.

  • Martínez-Peñaloza and Mezura-Montes [24] Immune-GDE3: combined generalized differential evolution (DE) with an artificial immune system to solve DMOPs.
  • Summary: diversity introduction keeps the population from getting stuck in local optima and is easy to implement.

Prediction-based approaches

  • Prediction approaches are proposed so that the population adapts more easily to the new environment after a change.
  • Zhou et al. [25] PPS: divides the population into a center point and a manifold. An autoregression (AR) model is used to locate the next center point, and the previous two consecutive manifolds are used to predict the next manifold. The predicted center point and manifold make up a new population better suited to the new environment.
  • Muruganantham et al. [26] MOEA/D-KF: applied a Kalman filter [44] in the decision space to predict the new Pareto-optimal set, together with a scoring scheme to decide the predicting proportion.
  • Hatzakis and Wallace [27]: autoregression applied to boundary points.
  • Peng et al. [28]: improved exploration and exploitation operators.
  • Wei and Wang [29]: hyperrectangle prediction.
  • Ruan et al. [30]: gradual search.
  • Wu et al. [31]: reinitialized individuals along the direction orthogonal to the predicted moving direction of the population in change response.
  • Ma et al. [32]: used a simple linear model to generate the population in the new environment.
  • Jiang and Yang [33] SGEA: guides the search by a moving direction from the centroid of the nondominated solution set to the centroid of the entire population. The step size is the Euclidean distance between the centroids of the nondominated solution sets at time steps (t−1) and t.
  • Summary: prediction approaches improve the convergence efficiency of the algorithm.
  • This paper combines diversity introduction with fast prediction to exploit the advantages of both, yielding an enhanced change-response strategy.

Decision Variable Classification Methods

  • Both diversity introduction and prediction can be viewed as probabilistic models for searching for optimal solutions. Most existing DMOEAs assume that all decision variables follow the same probability distribution, but in real DMOPs the distributions of different decision variables can differ greatly. With decision variable classification, the variables can be divided into groups and a dedicated probabilistic search model can be applied to each group to obtain better solutions.

Perturbation-based variable classification

For static problems

  • For example, [45]-[48] classify decision variables via decision variable perturbation. Perturbation generates a large number of extra individuals for classification and therefore consumes a proportionally large number of fitness evaluations. This strategy works well for static MOPs, where the classes of the decision variables do not change and classification is needed only once.

For dynamic problems

  • In dynamic problems the classification of the decision variables changes frequently, so more classification rounds and more evaluations are required.
  • Few works apply decision variable classification to dynamic problems, and the existing static methods are not well suited to them.
  • Woldesenbet and Yen [51] distinguish decision variables by their average sensitivity to changes in the objective space and use this to relocate individuals. The method works well for dynamic single-objective optimization problems but is not applicable to DMOPs.
  • Xu et al. [52] proposed a cooperative coevolutionary algorithm for DMOPs in which the decision variables are decomposed into two subcomponents: variables that are nonseparable and separable with respect to the environment variable t. Two populations cooperatively optimize the two subcomponents. The algorithm in [52] is superior on DMOPs whose decision variables can be decomposed by environmental sensitivity, but this may not hold for many DMOPs.

The method proposed in this paper

  • This paper proposes a more general decision variable classification method applicable to most DMOPs. It achieves accurate classification without extra objective evaluations and without accumulating statistics over many iterations. Specifically, the classification uses statistical information between the decision variables and the objective functions that is already available in the first iteration after each environmental change, i.e., no additional fitness evaluations are consumed. Notably, this is the first attempt to distinguish the distributions of decision variables in DMOPs (a single optimal value vs. multiple optimal values). From the start of the search, different sampling strategies are used for different decision variables, so that during the iterations the decision variables follow the distribution of PS(t) as closely as possible, giving better coverage of and convergence to PS(t).

Proposed Framework and Implementation

(The overall framework of DMOEA-DVC and the Algorithm 1 pseudocode from the paper are not reproduced here.)

Decision Variable Classification

  • Two kinds of variable classification are proposed: one used in static optimization (Algorithm 1, line 6) and one used in change response (Algorithm 1, line 9).

Decision Variable Classification in Static Optimization

Variables are first divided into single-optimal (convergence-related) and multi-optimal (diversity-related) variables

  • In one sentence: a single-optimal dimension should stay as close as possible to the best individual, while a multi-optimal dimension should stay as far from it as possible. Otherwise the population easily falls into local optima, and in early iterations elitism would also pull the multi-optimal dimensions toward the current best solutions, hurting diversity.

Distinguishing single-optimal and multi-optimal variables

  • If the objective functions conflict with each other on a variable, that variable is multi-optimal.
  • In DMOP, the objective functions could conflict with each other on some decision variables [46], [53]. If two objective functions conflict on a decision variable, the decision variable is deemed to have multiple optimal values.

Concrete procedure:

(Figure and equations from the paper describing the procedure omitted here.)

(My own thought) One issue to consider: when one variable changes, the other variables are not held identical either, so how can the change in the objective values be attributed to a single variable? If the dimensionality is high, how do we show that it is this variable, rather than the others, that caused the change? The explanation given is that in DMOPs usually only one variable is multi-optimal while the rest are single-optimal. I think this explanation could be refined further, but as a way of saving computational resources it is a reasonable compromise.

Using the Spearman rank correlation coefficient (SRCC) to evaluate the relation between variables and objectives

The basic idea: sort all individuals in the population by the value of variable i from low to high, and sort them again by their value on objective j; the difference between the two ranks of individual k is d(i, j, k). The SRCC r is computed from these d(i, j, k), and when r is above a positive threshold (or below a negative one), variable i and objective j are considered positively (or negatively) correlated.
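A minimal sketch of this rank-correlation test in Python (NumPy only). The correlation threshold and the way ties are broken are my assumptions, not values taken from the paper:

```python
import numpy as np

def srcc(pop_x, pop_f, i, j):
    """Spearman rank correlation between decision variable i and objective j.

    pop_x: (N, n) matrix of decision vectors, pop_f: (N, m) matrix of objective values.
    """
    N = pop_x.shape[0]
    # Rank of each individual when sorted by variable i and by objective j.
    rank_x = np.argsort(np.argsort(pop_x[:, i]))
    rank_f = np.argsort(np.argsort(pop_f[:, j]))
    d = rank_x - rank_f                      # rank difference d(i, j, k) of individual k
    return 1.0 - 6.0 * np.sum(d ** 2) / (N * (N ** 2 - 1))

def is_multi_optimal(pop_x, pop_f, i, threshold=0.5):
    """Variable i is multi-optimal if two objectives correlate with it in opposite
    directions, i.e., the objectives conflict on this variable."""
    signs = set()
    for j in range(pop_f.shape[1]):
        r = srcc(pop_x, pop_f, i, j)
        if r > threshold:
            signs.add(+1)
        elif r < -threshold:
            signs.add(-1)
    return {+1, -1} <= signs
```

is_multi_optimal mirrors the conflict criterion quoted above: a variable on which at least one objective increases while another decreases is treated as multi-optimal; all remaining variables are treated as single-optimal.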


Algorithm flow

(Pseudocode of the static-optimization variable classification from the paper omitted here.)

Decision Variable Classification in Change Response

  • In DMOPs, decision variables can be classified as similar, predictable, and unpredictable:
    • Similar variables: change little between two consecutive environments; they need no reinitialization when the environment changes.
    • Predictable variables: prediction brings a significant improvement; they are reinitialized by prediction when the environment changes.
    • Unpredictable variables: prediction brings almost no improvement; they are reinitialized by diversity introduction when the environment changes.
  • A t-test [54] is used to evaluate how strongly a decision variable's change is related to the environmental change; the relation between the current i-th variable and the i-th variable of the previous time step is expressed by a test statistic (formula in the paper, omitted here).

Using the t-test to separate similar from non-similar variables

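A hedged sketch of what such a similarity test can look like, using SciPy's two-sample t-test. The significance level and the exact statistic used in the paper are my assumptions:

```python
from scipy import stats

def is_similar(prev_x_i, curr_x_i, alpha=0.05):
    """Decide whether decision variable i is 'similar' across two consecutive environments.

    prev_x_i, curr_x_i: 1-D arrays with the i-th variable of every individual in the
    population just before and just after the environmental change.
    """
    t_stat, p_value = stats.ttest_ind(prev_x_i, curr_x_i, equal_var=False)
    # No statistically significant difference -> the variable barely moved, keep it as is.
    return p_value > alpha
```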

For non-similar variables, decide whether they are predictable

  • x_center is the mean of the decision vectors of all individuals in the population; x_trial[i] is obtained from x_center by changing only its i-th decision variable with the prediction method while keeping the other variables unchanged. If x_trial[i] dominates x_center, the i-th decision variable is deemed predictable; otherwise it is deemed unpredictable.
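A minimal sketch of this dominance-based check. The predict_variable helper and the evaluate callback are hypothetical names I introduce for illustration; they are not the paper's code:

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization assumed)."""
    return np.all(f_a <= f_b) and np.any(f_a < f_b)

def is_predictable(pop_x, i, predict_variable, evaluate):
    """Check whether prediction helps for decision variable i.

    pop_x: (N, n) decision matrix of the current population.
    predict_variable(x_center, i): hypothetical helper returning the predicted value
        of variable i for the population center.
    evaluate(x): returns the objective vector of a single solution x.
    """
    x_center = pop_x.mean(axis=0)                 # center of the population
    x_trial = x_center.copy()
    x_trial[i] = predict_variable(x_center, i)    # move only variable i by prediction
    # Predictable iff the predicted trial point dominates the center.
    return dominates(evaluate(x_trial), evaluate(x_center))
```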

For some problems, the prediction approach is not applicable

(Illustration from the paper omitted here.)

Algorithm flow

(Pseudocode of the change-response variable classification from the paper omitted here.)

Environmental Selection

  • DMOEA-DVC uses the same selection method as SGEA [33]; the fitness F(i) is the number of individuals that dominate individual x_i.
  • If the archive A contains fewer than N individuals, the best individuals from the population are selected into P'; if it contains exactly N, all individuals in A are moved into P'; if it contains more than N, the farthest (most spread-out) individuals are selected into P'.
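A rough sketch of this selection step under my reading of the note. In particular, the "farthest individuals" rule below is implemented as a greedy max-min-distance pick from the archive, which is an assumption rather than the paper's exact procedure:

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def fitness(F):
    """F(i) = number of individuals whose objective vectors dominate row i."""
    K = len(F)
    return np.array([sum(dominates(F[j], F[i]) for j in range(K) if j != i) for i in range(K)])

def environmental_selection(archive_f, pop_f, N):
    """Return (archive indices, population indices) forming the next population P'.

    archive_f: (A, m) objectives of the archive, pop_f: (P, m) objectives of the population.
    """
    A = len(archive_f)
    if A == N:
        return list(range(A)), []                       # take the whole archive
    if A < N:
        best_pop = np.argsort(fitness(pop_f))[: N - A]  # top up with the best of the population
        return list(range(A)), list(best_pop)
    # A > N: keep the most spread-out archive members (greedy max-min distance, assumption)
    keep = [int(np.argmin(fitness(archive_f)))]
    while len(keep) < N:
        d = np.min(np.linalg.norm(archive_f[:, None, :] - archive_f[keep][None, :, :], axis=2), axis=1)
        d[keep] = -np.inf
        keep.append(int(np.argmax(d)))
    return keep, []
```

The function returns index lists rather than copies, so the caller can assemble P' from the archive and population arrays.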

Change Response

  • After an environmental change, the three variable groups identified by DMOEA-DVC are handled with maintenance, diversity introduction, and prediction, respectively.

Maintenance

  • If a variable is similar, it is kept unchanged.

Diversity Introduction

  • If a variable is unpredictable, it is reinitialized by diversity introduction (formula in the paper, omitted here).

Using a Kalman filter for prediction

  • If the prediction scheme is unclear, refer to [25] and [33].
  • If a variable is predictable, it is reinitialized by the prediction model (formulas in the paper, omitted here).
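Putting the three strategies together, a hedged sketch of the change response. The simple difference-plus-noise predictor below is a stand-in of my own for the paper's prediction model, whose exact (Kalman-style) form is not reproduced in these notes:

```python
import numpy as np

def change_response(pop_x, prev_pop_x, similar, predictable, lb, ub, rng=np.random.default_rng()):
    """Reinitialize the population after an environmental change.

    pop_x, prev_pop_x: (N, n) decision matrices at time t and t-1.
    similar, predictable: boolean masks of length n from the variable classification.
    lb, ub: per-variable lower/upper bounds.
    """
    N, n = pop_x.shape
    new_x = pop_x.copy()
    # Center movement between the two consecutive environments, used for prediction.
    delta_center = pop_x.mean(axis=0) - prev_pop_x.mean(axis=0)
    for i in range(n):
        if similar[i]:
            continue                                    # maintenance: keep the variable as is
        if predictable[i]:
            # prediction: shift by the center movement plus small Gaussian noise (assumption)
            noise = rng.normal(0.0, 0.1 * abs(delta_center[i]) + 1e-12, size=N)
            new_x[:, i] = pop_x[:, i] + delta_center[i] + noise
        else:
            # diversity introduction: random reinitialization within the variable bounds
            new_x[:, i] = rng.uniform(lb[i], ub[i], size=N)
        new_x[:, i] = np.clip(new_x[:, i], lb[i], ub[i])
    return new_x
```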

Offspring Generation

  • The paper argues that SBX generates offspring close to their parents, so it suits single-optimal decision variables, while DE generates offspring far from their parents, so it suits multi-optimal decision variables (a sketch is given after this list).
  • The overall offspring-generation procedure is given in the paper (figure omitted here).
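A compact sketch of the two operators applied per variable group. The parameter values (SBX distribution index eta, DE scale factor F, crossover rate CR) are common defaults, not necessarily the ones used in the paper:

```python
import numpy as np

rng = np.random.default_rng()

def sbx_gene(p1, p2, eta=20.0):
    """Simulated binary crossover on a single gene: offspring stay close to the parents."""
    u = rng.random()
    beta = (2 * u) ** (1 / (eta + 1)) if u <= 0.5 else (1 / (2 * (1 - u))) ** (1 / (eta + 1))
    return 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)

def de_gene(x, r1, r2, r3, F=0.5, CR=0.9):
    """DE/rand/1/bin on a single gene: offspring can move far from the target vector."""
    return r1 + F * (r2 - r3) if rng.random() < CR else x

def generate_offspring(parent, mate, pop_x, single_mask):
    """Use SBX for single-optimal variables and DE for multi-optimal variables."""
    n = parent.shape[0]
    r1, r2, r3 = pop_x[rng.choice(len(pop_x), size=3, replace=False)]
    child = np.empty(n)
    for i in range(n):
        if single_mask[i]:   # single-optimal: converge near the parents
            child[i] = sbx_gene(parent[i], mate[i])
        else:                # multi-optimal: explore farther away with DE
            child[i] = de_gene(parent[i], r1[i], r2[i], r3[i])
    return child
```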

Individual Update Rules

  • See [33] for details.

Computational Complexity

(The complexity analysis from the paper is not reproduced here.)

References

[1] K. Deb, U. V. Rao, and S. Karthik, “Dynamic multi-objective optimization and decision-making using modified NSGA-II: A case study on hydro-thermal power scheduling,” in Proc. EMO, vol. 4403, 2007, pp. 803–817.

[4] M. Farina, K. Deb, and P. Amato, “Dynamic multi-objective optimization problems: Test cases, approximations, and applications,” IEEE Trans. Evol. Comput., vol. 8, no. 5, pp. 425–442, Oct. 2004.

[19] C.-K. Goh and K. C. Tan, “A competitive-cooperative coevolutionary paradigm for dynamic multi-objective optimization,” IEEE Trans. Evol. Comput., vol. 13, no. 1, pp. 103–127, Feb. 2009.

[20] M. Helbig and A. P. Engelbrecht, “Heterogeneous dynamic vector evaluated particle swarm optimization for dynamic multi-objective optimization,” in Proc. IEEE Congr. Evol. Comput. (CEC), 2014, pp. 3151–3159.

[21] A. P. Engelbrecht, “Heterogeneous particle swarm optimization,” in Proc. Int. Conf. Swarm Intell., 2010, pp. 191–202.

[22] M. A. M. de Oca, J. Peña, T. Stützle, C. Pinciroli, and M. Dorigo, “Heterogeneous particle swarm optimizers,” in Proc. IEEE Congr. Evol. Comput. (CEC), 2009, pp. 698–705.

[23] M. Greeff and A. P. Engelbrecht, “Solving dynamic multi-objective problems with vector evaluated particle swarm optimization,” in Proc. IEEE Congr. Evol. Comput. (CEC), 2008, pp. 2917–2924.

[24] M. Martínez-Peñaloza and E. Mezura-Montes, “Immune generalized differential evolution for dynamic multi-objective optimization problems,” in Proc. IEEE Congr. Evol. Comput. (CEC), 2015, pp. 846–851.

[25] A. Zhou, Y. Jin, and Q. Zhang, “A population prediction strategy for evolutionary dynamic multi-objective optimization,” IEEE Trans. Cybern., vol. 44, no. 1, pp. 40–53, Jan. 2014.

[26] A. Muruganantham, K. C. Tan, and P. Vadakkepat, “Evolutionary dynamic multi-objective optimization via Kalman filter prediction,” IEEE Trans. Cybern., vol. 46, no. 12, pp. 2862–2873, Dec. 2016.

[27] I. Hatzakis and D. Wallace, “Dynamic multi-objective optimization with evolutionary algorithms: A forward-looking approach,” in Proc. ACM Conf. Genet. Evol. Comput., 2006, pp. 1201–1208.

[28] Z. Peng, J. Zheng, J. Zou, and M. Liu, “Novel prediction and memory strategies for dynamic multi-objective optimization,” Soft Comput., vol. 19, no. 9, pp. 2633–2653, 2014.

[29] J. Wei and Y. Wang, “Hyper rectangle search based particle swarm algorithm for dynamic constrained multi-objective optimization problems,” in Proc. IEEE Congr. Evol. Comput. (CEC), 2012, pp. 259–266.

[30] G. Ruan, G. Yu, J. Zheng, J. Zou, and S. Yang, “The effect of diversity maintenance on prediction in dynamic multiobjective optimization,” Appl. Soft Comput., vol. 58, pp. 631–647, Sep. 2017.

[31] Y. Wu, Y. Jin, and X. Liu, “A directed search strategy for evolutionary dynamic multi-objective optimization,” Soft Comput., vol. 19, no. 11, pp. 3221–3235, 2015.

[32] Y. Ma, R. Liu, and R. Shang, “A hybrid dynamic multi-objective immune optimization algorithm using prediction strategy and improved differential evolution crossover operator,” in Proc. Neural Inf. Process., vol. 7063, 2011, pp. 435–444.

[33] S. Jiang and S. Yang, “A steady-state and generational evolutionary algorithm for dynamic multi-objective optimization,” IEEE Trans. Evol. Comput., vol. 21, no. 1, pp. 65–82, Feb. 2017.

[35] M. Jiang, Z. Huang, L. Qiu, W. Huang, and G. G. Yen, “Transfer learning based dynamic multiobjective optimization algorithms,” IEEE Trans. Evol. Comput., vol. 22, no. 4, pp. 501–514, Aug. 2018, doi: 10.1109/TEVC.2017.2771451.

[41] W. Koo, C. Goh, and K. C. Tan, “A predictive gradient strategy for multi-objective evolutionary algorithms in a fast changing environment,” Memetic Comput., vol. 2, no. 2, pp. 87–110, 2010.

[42] S. Jiang and S. Yang, “Evolutionary dynamic multi-objective optimization: Benchmarks and algorithm comparisons,” IEEE Trans. Cybern., vol. 47, no. 1, pp. 198–211, Jan. 2017.

[43] S. Jiang, S. Yang, X. Yao, and K. C. Tan, “Benchmark functions for the CEC’2018 competition on dynamic multiobjective optimization,” Centre Comput. Intell., Newcastle Univ., Newcastle upon Tyne, U.K., Rep. TRCEC2018, 2018.

[44] A. Muruganantham, Y. Zhao, S. B. Gee, X. Qiu, and K. C. Tan, “Dynamic multi-objective optimization using evolutionary algorithm with Kalman filter,” Proc Comput. Sci., vol. 24, pp. 66–75, Nov. 2013.

[45] X. Zhang, Y. Tian, R. Cheng, and Y. Jin, “A decision variable clustering-based evolutionary algorithm for large-scale many-objective optimization,” IEEE Trans. Evol. Comput., vol. 22, no. 1, pp. 97–112, Feb. 2018.

[46] X. Ma et al., “A multiobjective evolutionary algorithm based on decision variable analysis for multiobjective optimization problems with largescale variables,” IEEE Trans. Evol. Comput., vol. 20, no. 2, pp. 275–298, Apr. 2016.

[47] C. K. Goh, K. C. Tan, D. S. Liu, and S. C. Chiam, “A competitive and cooperative co-evolutionary approach to multi-objective particle swarm optimization algorithm design,” Eur. J. Oper. Res., vol. 202, no. 1, pp. 42–54, 2010.

[48] M. N. Omidvar, X. Li, Y. Mei, and X. Yao, “Cooperative co-evolution with differential grouping for large scale optimization,” IEEE Trans. Evol. Comput., vol. 18, no. 3, pp. 378–393, Jun. 2014.

[49] J. Sun and H. Dong, “Cooperative co-evolution with correlation identification grouping for large scale function optimization,” in Proc. Int. Conf. Inf. Sci. Technol. (ICIST), 2013, pp. 889–893.

[50] M. N. Omidvar, X. Li, and X. Yao, “Cooperative co-evolution with delta grouping for large scale non-separable function optimization” in Proc. IEEE Congr. Evol. Comput., 2010, pp. 1762–1769.

[51] Y. G. Woldesenbet and G. G. Yen, “Dynamic evolutionary algorithm with variable relocation,” IEEE Trans. Evol. Comput., vol. 13, no. 3, pp. 500–513, Jun. 2009.

[52] B. Xu, Y. Zhang, D. Gong, Y. Guo, and M. Rong, “Environment sensitivity-based cooperative co-evolutionary algorithms for dynamic multi-objective optimization,” IEEE/ACM Trans. Comput. Biol. Bioinform., vol. 15, no. 6, pp. 1877–1890, Nov./Dec. 2017.

[53] S. Huband, P. Hingston, L. Barone, and L. While, “A review of multiobjective test problems and a scalable test problem toolkit,” IEEE Trans. Evol. Comput., vol. 10, no. 5, pp. 477–506, Oct. 2006.

[54] Student, “The probable error of a mean,” Biometrika, vol. 6, no. 1, pp. 1–25, 1908.

[55] D. Wang, H. Zhang, R. Liu, W. Lv, and D. Wang, “t-test feature selection approach based on term frequency for text categorization,” Pattern Recognit. Lett., vol. 45, no. 1, pp. 1–10, 2014.

[56] B. Chen, W. Zeng, Y. Lin, and D. Zhang, “A new local search based multi-objective optimization algorithm,” IEEE Trans. Evol. Comput., vol. 19, no. 1, pp. 50–73, Feb. 2015.

[57] C. Chen and L. Y. Tseng, “An improved version of the multiple trajectory search for real value multi-objective optimization problems,” Eng. Optim., vol. 46, no. 10, pp. 1430–1445, 2014.

[58] C. Rossi, M. Abderrahim, and J. C. Díaz, “Tracking moving optima using Kalman-based predictions,” Evol. Comput., vol. 16, no. 1, pp. 1–30, 2008.