EDAspy.optimization.multivariate package

Submodules

EDAspy.optimization.multivariate.speda module

class EDAspy.optimization.multivariate.speda.SPEDA(size_gen: int, max_iter: int, dead_iter: int, n_variables: int, lower_bound: array | List[float] | float | None = None, upper_bound: array | List[float] | float | None = None, l: float = 10, alpha: float = 0.5, disp: bool = True, black_list: list | None = None, white_list: list | None = None, parallelize: bool = False, init_data: array | None = None, w_noise: float = 0.5)[source]

Bases: EDA

Semiparametric Estimation of Distribution Algorithm [1]. This type of Estimation-of-Distribution Algorithm uses a semiparametric Bayesian network [2], which allows dependencies between variables estimated using KDE and variables that fit a Gaussian distribution. In this way, it avoids the assumption of Gaussianity in the variables of the optimization problem. This multivariate probabilistic model is updated in each iteration with the best individuals of the previous generations.

SPEDA has been shown to improve results on more complex optimization problems compared to the univariate EDAs implemented in this package, multivariate EDAs such as EGNA or EMNA, and other population-based algorithms. See [1] for numerical results.

Example

This example solves some well-known benchmarks from the CEC14 conference using a Semiparametric Estimation of Distribution Algorithm (SPEDA).

from EDAspy.optimization import SPEDA
from EDAspy.benchmarks import ContinuousBenchmarkingCEC14

benchmarking = ContinuousBenchmarkingCEC14(10)

speda = SPEDA(size_gen=300, max_iter=100, dead_iter=20, n_variables=10, lower_bound=-100,
              upper_bound=100, l=10)

eda_result = speda.minimize(benchmarking.cec14_4, True)
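
The returned EdaResult object bundles the best solution found together with runtime information. A minimal sketch of inspecting it, assuming attribute names such as best_ind, best_cost and cpu_time (verify them against your installed EDAspy version):

# Inspect the optimization result (attribute names are assumed, not guaranteed).
print(eda_result.best_ind)   # best solution vector found
print(eda_result.best_cost)  # cost of that solution
print(eda_result.cpu_time)   # CPU time spent in the optimization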

References

[1]: Soloviev, V. P., Bielza, C., & Larrañaga, P. (2022). Semiparametric Estimation of Distribution Algorithm for continuous optimization.

[2]: Atienza, D., Bielza, C., & Larrañaga, P. (2022). PyBNesian: an extensible Python package for Bayesian networks. Neurocomputing, 504, 204-209.

property pm: ProbabilisticModel

Returns the probabilistic model used in the EDA implementation.

Returns:

probabilistic model.

Return type:

ProbabilisticModel

property init: GenInit

Returns the initializer used in the EDA implementation.

Returns:

initializer.

Return type:

GenInit

export_settings() → dict

Exports the configuration of the algorithm to a dictionary that can be loaded in another execution.

Returns:

configuration dictionary.

Return type:

dict

minimize(cost_function: callable, output_runtime: bool = True, ftol: float = 1e-08, *args, **kwargs) → EdaResult

Executes the EDA optimization to minimize the given cost function. By default, the optimizer is designed to minimize; if maximization is desired, just add a minus sign to your cost function.

Parameters:
  • cost_function – cost function to be optimized; it must accept an array as argument.

  • output_runtime – True if runtime information is desired during execution.

  • ftol – termination tolerance.

Returns:

EdaResult object with results and information.

Return type:

EdaResult
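
For example, to maximize a function f, a thin negating wrapper suffices. A minimal sketch, where f is a placeholder for your objective and the best_cost attribute name is assumed:

# Maximize f by minimizing its negation (f is a hypothetical objective).
def negated_f(solution):
    return -f(solution)

eda_result = speda.minimize(negated_f, True)
best_value = -eda_result.best_cost  # undo the sign to recover the maximum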

EDAspy.optimization.multivariate.ebna module

class EDAspy.optimization.multivariate.ebna.EBNA(size_gen: int, max_iter: int, dead_iter: int, n_variables: int, possible_values: List | array, frequency: List | array, alpha: float = 0.5, elite_factor: float = 0.4, disp: bool = True, parallelize: bool = False, init_data: array | None = None)[source]

Bases: EDA

Estimation of Bayesian Networks Algorithm. This type of Estimation-of-Distribution Algorithm uses a discrete Bayesian network from which new solutions are sampled. This multivariate probabilistic model is updated in each iteration with the best individuals of the previous generation. The main difference with respect to BOA is that a Bayesian Information Criterion score is used for the structure learning process.

Example

This toy example shows how to use the EBNA implementation.

import numpy as np

from EDAspy.optimization import EBNA

def categorical_cost_function(solution: np.array):
    cost_dict = {
        'Color': {'Red': 0.1, 'Green': 0.5, 'Blue': 0.3},
        'Shape': {'Circle': 0.3, 'Square': 0.2, 'Triangle': 0.4},
        'Size': {'Small': 0.4, 'Medium': 0.2, 'Large': 0.1}
    }
    keys = list(cost_dict.keys())
    choices = {keys[i]: solution[i] for i in range(len(solution))}

    total_cost = 0.0
    for variable, choice in choices.items():
        total_cost += cost_dict[variable][choice]

    return total_cost

variables = ['Color', 'Shape', 'Size']
possible_values = np.array([
    ['Red', 'Green', 'Blue'],
    ['Circle', 'Square', 'Triangle'],
    ['Small', 'Medium', 'Large']], dtype=object
)

frequency = np.array([
    [.33, .33, .33],
    [.33, .33, .33],
    [.33, .33, .33]], dtype=object
)

n_variables = len(variables)

ebna = EBNA(size_gen=10, max_iter=10, dead_iter=10, n_variables=n_variables, alpha=0.5,
            possible_values=possible_values, frequency=frequency)

ebna_result = ebna.minimize(categorical_cost_function, True)
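
In this toy problem the optimum is easy to verify by hand: picking the cheapest value per variable gives ('Red', 'Square', 'Large'), with a total cost of 0.1 + 0.2 + 0.1 = 0.4, against which ebna_result can be checked.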

References

[1]: Larrañaga, P., & Lozano, J. A. (Eds.). (2001). Estimation of distribution algorithms: A new tool for evolutionary computation. Kluwer Academic Publishers.

property pm: ProbabilisticModel

Returns the probabilistic model used in the EDA implementation.

Returns:

probabilistic model.

Return type:

ProbabilisticModel

property init: GenInit

Returns the initializer used in the EDA implementation.

Returns:

initializer.

Return type:

GenInit

export_settings() → dict

Exports the configuration of the algorithm to a dictionary that can be loaded in another execution.

Returns:

configuration dictionary.

Return type:

dict

minimize(cost_function: callable, output_runtime: bool = True, ftol: float = 1e-08, *args, **kwargs) → EdaResult

Executes the EDA optimization to minimize the given cost function. By default, the optimizer is designed to minimize; if maximization is desired, just add a minus sign to your cost function.

Parameters:
  • cost_function – cost function to be optimized; it must accept an array as argument.

  • output_runtime – True if runtime information is desired during execution.

  • ftol – termination tolerance.

Returns:

EdaResult object with results and information.

Return type:

EdaResult

EDAspy.optimization.multivariate.keda module

class EDAspy.optimization.multivariate.keda.MultivariateKEDA(size_gen: int, max_iter: int, dead_iter: int, n_variables: int, lower_bound: array | List[float] | float | None = None, upper_bound: array | List[float] | float | None = None, l: float = 10, alpha: float = 0.5, disp: bool = True, black_list: list | None = None, white_list: list | None = None, parallelize: bool = False, init_data: array | None = None, w_noise: float = 0.5)[source]

Bases: EDA

Kernel Estimation of Distribution Algorithm [1]. This type of Estimation-of-Distribution Algorithm uses a KDE Bayesian network [2], which allows dependencies between variables, all of which are estimated using KDE. This multivariate probabilistic model is updated in each iteration with the best individuals of the previous generations.

Example

This example solves some well-known benchmarks from the CEC14 conference using a Kernel Estimation of Distribution Algorithm (KEDA).

from EDAspy.optimization import MultivariateKEDA
from EDAspy.benchmarks import ContinuousBenchmarkingCEC14

benchmarking = ContinuousBenchmarkingCEC14(10)

keda = MultivariateKEDA(size_gen=300, max_iter=100, dead_iter=20, n_variables=10,
                        lower_bound=-100, upper_bound=100, l=10)

eda_result = keda.minimize(benchmarking.cec14_4, True)
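
The black_list and white_list arguments can forbid or force specific arcs during the structure learning of the KDE Bayesian network. A minimal sketch, assuming arcs are passed as (parent, child) tuples over the internal variable names (here assumed to be the stringified indices '0' to '9'; check the EDAspy source for the exact convention):

# Hypothetical arc constraints; the tuple format and the '0'..'9'
# variable names are assumptions, not confirmed API behaviour.
keda = MultivariateKEDA(size_gen=300, max_iter=100, dead_iter=20, n_variables=10,
                        lower_bound=-100, upper_bound=100, l=10,
                        white_list=[('0', '1')],  # force arc 0 -> 1
                        black_list=[('2', '3')])  # forbid arc 2 -> 3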

References

[1]: Soloviev, V. P., Bielza, C., & Larrañaga, P. (2022). Semiparametric Estimation of Distribution Algorithm for continuous optimization.

[2]: Atienza, D., Bielza, C., & Larrañaga, P. (2022). PyBNesian: an extensible Python package for Bayesian networks. Neurocomputing, 504, 204-209.

EDAspy.optimization.multivariate.egna module

class EDAspy.optimization.multivariate.egna.EGNA(size_gen: int, max_iter: int, dead_iter: int, n_variables: int, lower_bound: array | List[float] | float | None = None, upper_bound: array | List[float] | float | None = None, alpha: float = 0.5, elite_factor: float = 0.4, disp: bool = True, black_list: list | None = None, white_list: list | None = None, parallelize: bool = False, init_data: array | None = None, w_noise: float = 0.5)[source]

Bases: EDA

Estimation of Gaussian Networks Algorithm. This type of Estimation-of-Distribution Algorithm uses a Gaussian Bayesian network from which new solutions are sampled. This multivariate probabilistic model is updated in each iteration with the best individuals of the previous generation.

EGNA [1] has been shown to improve results on more complex optimization problems compared to the univariate EDAs implemented in this package. Several modifications of this algorithm have been proposed, such as [2], where evidence is introduced into the Gaussian Bayesian network in order to restrict the search space in the landscape.

Example

This example solves some well-known benchmarks from the CEC14 conference using an Estimation of Gaussian Networks Algorithm (EGNA).

from EDAspy.optimization import EGNA
from EDAspy.benchmarks import ContinuousBenchmarkingCEC14

benchmarking = ContinuousBenchmarkingCEC14(10)

egna = EGNA(size_gen=300, max_iter=100, dead_iter=20, n_variables=10,
            lower_bound=-100, upper_bound=100)

eda_result = egna.minimize(benchmarking.cec14_4, True)
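
Both the runtime output and the termination tolerance of minimize can be adjusted through its keyword arguments, as in this short sketch:

# Run silently with a custom termination tolerance.
eda_result = egna.minimize(benchmarking.cec14_4, output_runtime=False, ftol=1e-06)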

References

[1]: Larrañaga, P., & Lozano, J. A. (Eds.). (2001). Estimation of distribution algorithms: A new tool for evolutionary computation (Vol. 2). Springer Science & Business Media.

[2]: Soloviev, V. P., Larrañaga, P., & Bielza, C. (2022). Estimation of distribution algorithms using Gaussian Bayesian networks to solve industrial optimization problems constrained by environment variables. Journal of Combinatorial Optimization.

property pm: ProbabilisticModel

Returns the probabilistic model used in the EDA implementation.

Returns:

probabilistic model.

Return type:

ProbabilisticModel

property init: GenInit

Returns the initializer used in the EDA implementation.

Returns:

initializer.

Return type:

GenInit

export_settings() → dict

Exports the configuration of the algorithm to a dictionary that can be loaded in another execution.

Returns:

configuration dictionary.

Return type:

dict

minimize(cost_function: callable, output_runtime: bool = True, ftol: float = 1e-08, *args, **kwargs) → EdaResult

Executes the EDA optimization to minimize the given cost function. By default, the optimizer is designed to minimize; if maximization is desired, just add a minus sign to your cost function.

Parameters:
  • cost_function – cost function to be optimized; it must accept an array as argument.

  • output_runtime – True if runtime information is desired during execution.

  • ftol – termination tolerance.

Returns:

EdaResult object with results and information.

Return type:

EdaResult

EDAspy.optimization.multivariate.emna module

class EDAspy.optimization.multivariate.emna.EMNA(size_gen: int, max_iter: int, dead_iter: int, n_variables: int, lower_bound: array | List[float] | float | None = None, upper_bound: array | List[float] | float | None = None, alpha: float = 0.5, elite_factor: float = 0.4, disp: bool = True, lower_factor: float = 0.5, upper_factor: float = 100, parallelize: bool = False, init_data: array | None = None, w_noise: float = 0.5)[source]

Bases: EDA

Estimation of Multivariate Normal Algorithm (EMNA) [1] is a multivariate continuous EDA in which no probabilistic graphical models are used during runtime. In each iteration the new solutions are sampled from a multivariate normal distribution built from the elite selection of the previous generation.

In this implementation, as in EGNA, the algorithm is initialized by sampling uniformly within the landscape bounds given in the constructor. If a different initialization model is desired, you can subclass and override this specific method.

This algorithm is widely used in the literature and is compared, across different optimization tasks, with its competitors within the multivariate continuous EDA research topic.

Example

This example solves some well-known benchmarks from the CEC14 conference using an Estimation of Multivariate Normal Algorithm (EMNA).

from EDAspy.optimization import EMNA
from EDAspy.benchmarks import ContinuousBenchmarkingCEC14

benchmarking = ContinuousBenchmarkingCEC14(10)

emna = EMNA(size_gen=300, max_iter=100, dead_iter=20, n_variables=10, lower_bound=-100,
            upper_bound=100)

eda_result = emna.minimize(cost_function=benchmarking.cec14_4)
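
Instead of the default uniform initialization, the init_data constructor argument accepts a custom initial population. A minimal sketch, assuming a (size_gen, n_variables)-shaped array is expected (the exact shape requirement is an assumption; check the EDAspy documentation):

import numpy as np

# Hypothetical warm start around the origin; the (size_gen, n_variables)
# shape is an assumption.
initial_population = np.random.normal(0, 10, size=(300, 10))
emna = EMNA(size_gen=300, max_iter=100, dead_iter=20, n_variables=10,
            lower_bound=-100, upper_bound=100, init_data=initial_population)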

References

[1]: Larrañaga, P., & Lozano, J. A. (Eds.). (2001). Estimation of distribution algorithms: A new tool for evolutionary computation (Vol. 2). Springer Science & Business Media.

property pm: ProbabilisticModel

Returns the probabilistic model used in the EDA implementation.

Returns:

probabilistic model.

Return type:

ProbabilisticModel

property init: GenInit

Returns the initializer used in the EDA implementation.

Returns:

initializer.

Return type:

GenInit

export_settings() → dict

Exports the configuration of the algorithm to a dictionary that can be loaded in another execution.

Returns:

configuration dictionary.

Return type:

dict

minimize(cost_function: callable, output_runtime: bool = True, ftol: float = 1e-08, *args, **kwargs) → EdaResult

Executes the EDA optimization to minimize the given cost function. By default, the optimizer is designed to minimize; if maximization is desired, just add a minus sign to your cost function.

Parameters:
  • cost_function – cost function to be optimized; it must accept an array as argument.

  • output_runtime – True if runtime information is desired during execution.

  • ftol – termination tolerance.

Returns:

EdaResult object with results and information.

Return type:

EdaResult

EDAspy.optimization.multivariate.boa module

class EDAspy.optimization.multivariate.boa.BOA(size_gen: int, max_iter: int, dead_iter: int, n_variables: int, possible_values: List | array, frequency: List | array, alpha: float = 0.5, elite_factor: float = 0.4, disp: bool = True, parallelize: bool = False, init_data: array | None = None)[source]

Bases: EDA

Bayesian Optimization Algorithm. This type of Estimation-of-Distribution Algorithm uses a discrete Bayesian network from which new solutions are sampled. This multivariate probabilistic model is updated in each iteration with the best individuals of the previous generation. The main difference with respect to EBNA is that a Bayesian Dirichlet score is used for the structure learning process.

Example

This toy example shows how to use the BOA implementation.

import numpy as np

from EDAspy.optimization import BOA

def categorical_cost_function(solution: np.array):
    cost_dict = {
        'Color': {'Red': 0.1, 'Green': 0.5, 'Blue': 0.3},
        'Shape': {'Circle': 0.3, 'Square': 0.2, 'Triangle': 0.4},
        'Size': {'Small': 0.4, 'Medium': 0.2, 'Large': 0.1}
    }
    keys = list(cost_dict.keys())
    choices = {keys[i]: solution[i] for i in range(len(solution))}

    total_cost = 0.0
    for variable, choice in choices.items():
        total_cost += cost_dict[variable][choice]

    return total_cost

variables = ['Color', 'Shape', 'Size']
possible_values = np.array([
    ['Red', 'Green', 'Blue'],
    ['Circle', 'Square', 'Triangle'],
    ['Small', 'Medium', 'Large']], dtype=object
)

frequency = np.array([
    [.33, .33, .33],
    [.33, .33, .33],
    [.33, .33, .33]], dtype=object
)

n_variables = len(variables)

boa = BOA(size_gen=10, max_iter=10, dead_iter=10, n_variables=n_variables, alpha=0.5,
          possible_values=possible_values, frequency=frequency)

boa_result = boa.minimize(categorical_cost_function, True)
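
Once the run finishes, the best categorical assignment can be read back from the result object. A minimal sketch, assuming the EdaResult attributes best_ind and best_cost (verify the names against your EDAspy version):

# Map the best solution vector back to named choices
# (best_ind and best_cost attribute names are assumed).
print(dict(zip(variables, boa_result.best_ind)))
print(boa_result.best_cost)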

References

[1]: Larrañaga, P., & Lozano, J. A. (Eds.). (2001). Estimation of distribution algorithms: A new tool for evolutionary computation. Kluwer Academic Publishers.

property pm: ProbabilisticModel

Returns the probabilistic model used in the EDA implementation.

Returns:

probabilistic model.

Return type:

ProbabilisticModel

property init: GenInit

Returns the initializer used in the EDA implementation.

Returns:

initializer.

Return type:

GenInit

export_settings() → dict

Exports the configuration of the algorithm to a dictionary that can be loaded in another execution.

Returns:

configuration dictionary.

Return type:

dict

minimize(cost_function: callable, output_runtime: bool = True, ftol: float = 1e-08, *args, **kwargs) → EdaResult

Executes the EDA optimization to minimize the given cost function. By default, the optimizer is designed to minimize; if maximization is desired, just add a minus sign to your cost function.

Parameters:
  • cost_function – cost function to be optimized; it must accept an array as argument.

  • output_runtime – True if runtime information is desired during execution.

  • ftol – termination tolerance.

Returns:

EdaResult object with results and information.

Return type:

EdaResult

Module contents