Using SPEDA for continuous optimization

In this notebook we use the SPEDA approach to optimize a well-known benchmark. Note that SPEDA learns and samples a semiparametric Bayesian network in each iteration. We first import the algorithm and the benchmark suite from EDAspy.

import matplotlib.pyplot as plt

from EDAspy.optimization import SPEDA
from EDAspy.benchmarks import ContinuousBenchmarkingCEC14

We will be using a benchmark with 10 variables.

n_vars = 10
benchmarking = ContinuousBenchmarkingCEC14(n_vars)
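
As a quick sanity check, the objective can be evaluated directly. This is a minimal sketch assuming the benchmark functions accept a NumPy array of length n_vars and return a scalar cost, as their use with minimize below suggests.

import numpy as np

# Evaluate CEC14 function 4 on a random point inside the search bounds.
x = np.random.uniform(-60, 60, n_vars)
print(benchmarking.cec14_4(x))  # a single float; lower is better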

We initialize the EDA with the following parameters, where dead_iter is the number of consecutive iterations without improvement tolerated before early stopping, and l controls the size of the archive of solutions from which the semiparametric Bayesian network is learnt:

speda = SPEDA(size_gen=300, max_iter=100, dead_iter=20, n_variables=n_vars,
              lower_bound=-60, upper_bound=60, l=10)

We now run the minimizer over benchmark function 4 of the CEC14 suite; the second argument enables runtime output during the run.

eda_result = speda.minimize(benchmarking.cec14_4, True)
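
When the run finishes, the returned object holds the outcome of the optimization. The attribute names below (best_cost, best_ind) are assumed from EDAspy's EdaResult; check them against your installed version.

# Inspect the result (attribute names assumed from EDAspy's EdaResult).
print('Best cost found:', eda_result.best_cost)
print('Best solution:', eda_result.best_ind)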

We plot the best cost found in each iteration of the algorithm.

plt.figure(figsize=(14, 6))

plt.title('Best cost found in each iteration of EDA')
plt.plot(list(range(len(eda_result.history))), eda_result.history, color='b')
plt.xlabel('iteration')
plt.ylabel('best cost')
plt.show()

Let’s visualize the structure of the semiparametric Bayesian network learnt in the last iteration of the algorithm.

from EDAspy.optimization import plot_bn

plot_bn(speda.pm.print_structure(), n_variables=n_vars)
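
The arcs can also be inspected directly. This assumes print_structure() returns the list of arcs of the learnt network, which is what plot_bn consumes above.

# Print the arcs of the learnt structure, e.g. a list of (parent, child) pairs.
print(speda.pm.print_structure())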