Using EGNA for continuous optimization

In this notebook we use the EGNA approach to optimize a well-known benchmark. Note that EGNA learns and samples a Gaussian Bayesian network (GBN) in each iteration; a conceptual sketch of this loop follows the imports below. We first import the algorithm and the benchmarks from EDAspy.

from EDAspy.optimization import EGNA
from EDAspy.benchmarks import ContinuousBenchmarkingCEC14
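
To make the learn-and-sample loop concrete, here is a minimal, self-contained sketch of the idea. It uses a plain multivariate Gaussian as a stand-in for the Gaussian Bayesian network that EGNA actually fits; toy_egna_loop, its parameters, and the hard-coded bounds are illustrative only, not EDAspy API.

import numpy as np

def toy_egna_loop(cost, n_vars, size_gen=300, n_iters=100, top_frac=0.5):
    # Initial population drawn uniformly from the search bounds
    pop = np.random.uniform(-60, 60, size=(size_gen, n_vars))
    for _ in range(n_iters):
        costs = np.apply_along_axis(cost, 1, pop)
        # Truncation selection: keep the best-scoring fraction
        elite = pop[np.argsort(costs)[:int(top_frac * size_gen)]]
        # Fit a joint Gaussian to the elite set (stand-in for the GBN)
        mean = elite.mean(axis=0)
        cov = np.cov(elite, rowvar=False) + 1e-6 * np.eye(n_vars)
        # Sample the next generation from the learnt model
        pop = np.random.multivariate_normal(mean, cov, size=size_gen)
    costs = np.apply_along_axis(cost, 1, pop)
    return pop[np.argmin(costs)]

For example, toy_egna_loop(lambda x: np.sum(x**2), 10) quickly concentrates the population around the origin.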

We will be using a benchmark with 10 variables.

n_vars = 10
benchmarking = ContinuousBenchmarkingCEC14(n_vars)
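
Before running the optimizer, we can sanity-check the objective by evaluating it on a random point. This assumes cec14_4 accepts a 1-D array of length n_vars and returns a scalar cost, which is consistent with how it is passed to minimize below.

import numpy as np

# Evaluate the benchmark function once on a random point in the bounds
x = np.random.uniform(-60, 60, n_vars)
print(benchmarking.cec14_4(x))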

We initialize the EDA with the following parameters: size_gen (individuals per generation), max_iter (maximum number of iterations), dead_iter (iterations without improvement before stopping early), the number of variables, and the search-space bounds.

egna = EGNA(size_gen=300, max_iter=100, dead_iter=20, n_variables=n_vars,
            lower_bound=-60, upper_bound=60)

We then run the minimization over the benchmark function:

eda_result = egna.minimize(benchmarking.cec14_4, True)
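
Besides the cost history plotted below, the result object also holds the best solution found. The attribute names best_cost and best_ind used here are an assumption about EDAspy's result object; check your installed version if they differ.

# Assumed attribute names on the result object (verify against your EDAspy version)
print('Best cost found:', eda_result.best_cost)
print('Best solution:', eda_result.best_ind)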

We plot the best cost found in each iteration of the algorithm.

import matplotlib.pyplot as plt

plt.figure(figsize=(14, 6))

plt.title('Best cost found in each iteration of EDA')
plt.plot(list(range(len(eda_result.history))), eda_result.history, color='b')
plt.xlabel('iteration')
plt.ylabel('best cost')
plt.show()

Let’s visualize the BN structure learnt in the last iteration of the algorithm.

from EDAspy.optimization import plot_bn

plot_bn(egna.pm.print_structure(), n_variables=n_vars)
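
If a textual view is preferred, the same structure can be inspected directly: print_structure is the same call already passed to plot_bn above, so we simply print its output.

# Print the learnt arcs of the Gaussian Bayesian network as text
print(egna.pm.print_structure())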