EDAspy.optimization.univariate package

Submodules

EDAspy.optimization.univariate.umda_binary module

class EDAspy.optimization.univariate.umda_binary.UMDAd(size_gen: int, max_iter: int, dead_iter: int, n_variables: int, alpha: float = 0.5, vector: array | None = None, lower_bound: float = 0.2, upper_bound: float = 0.8, elite_factor: float = 0.4, disp: bool = True, parallelize: bool = False, init_data: array | None = None)[source]

Bases: EDA

Binary univariate marginal Estimation of Distribution Algorithm. New individuals are sampled from a univariate binary probabilistic model. It can be used for hyper-parameter optimization or to optimize a function.

UMDA [1] is a specific type of Estimation of Distribution Algorithm (EDA) in which new individuals are sampled from univariate binary distributions that are updated in each iteration of the algorithm using the best individuals found in the previous one. In this implementation each individual is an array of 0s and 1s, so new individuals are sampled from a univariate probabilistic model updated in each iteration. Optionally, lower and upper bounds can be set on the probabilities to avoid premature convergence.
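
As a rough illustration, the following minimal sketch (illustrative helper names, not EDAspy's actual internals) shows the update and sampling steps described above:

import numpy as np

def update_model(elite, lower_bound=0.2, upper_bound=0.8):
    # elite: (n_elite, n_variables) 0/1 matrix of the best individuals
    p = elite.mean(axis=0)                       # marginal frequency of 1s
    return np.clip(p, lower_bound, upper_bound)  # bound probabilities to avoid premature convergence

def sample_generation(p, size_gen):
    # draw each variable from an independent Bernoulli(p_j)
    return (np.random.rand(size_gen, p.size) < p).astype(int)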

This approach has been widely used and shown to achieve very good results in a wide range of problems such as Feature Subset Selection or Portfolio Optimization.

Example

This short example runs UMDAd for a toy example of the One-Max problem.

from EDAspy.benchmarks import one_max
from EDAspy.optimization import UMDAd

def one_max_min(array):
    return -one_max(array)

umda = UMDAd(size_gen=100, max_iter=100, dead_iter=10, n_variables=10)
# We leave the probability bounds at their default values
eda_result = umda.minimize(one_max_min, True)
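
The returned EdaResult bundles the outcome of the run. The attribute names below are assumptions based on EDAspy's EdaResult and may differ between versions:

print(eda_result.best_ind)   # assumed attribute: best solution found
print(eda_result.best_cost)  # assumed attribute: cost of that solution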

References

[1]: Mühlenbein, H., & Paass, G. (1996, September). From recombination of genes to the estimation of distributions I. Binary parameters. In International conference on parallel problem solving from nature (pp. 178-187). Springer, Berlin, Heidelberg.

property pm: ProbabilisticModel

Returns the probabilistic model used in the EDA implementation.

Returns:

probabilistic model.

Return type:

ProbabilisticModel

export_settings() dict

Export the configuration of the algorithm as a dictionary that can be loaded in another execution (a possible round-trip is sketched below).

Returns:

configuration dictionary.

Return type:

dict
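
A possible round-trip, under the assumption that the keys of the exported dictionary match the constructor arguments (verify against your EDAspy version):

settings = umda.export_settings()  # configuration dictionary
umda_clone = UMDAd(**settings)     # assumption: keys map to __init__ arguments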

property init: GenInit

Returns the initializer used in the EDA implementation.

Returns:

initializer.

Return type:

GenInit

minimize(cost_function: callable, output_runtime: bool = True, ftol: float = 1e-08, *args, **kwargs) EdaResult

Executes the EDA optimization. By default, the optimizer is designed to minimize a cost function; if maximization is desired, just add a minus sign to your cost function.

Parameters:
  • cost_function – cost function to be optimized; it must accept an array as its argument.

  • output_runtime – true if information should be printed during runtime.

  • ftol – termination tolerance.

Returns:

EdaResult object with results and information.

Return type:

EdaResult

EDAspy.optimization.univariate.umda_categorical module

class EDAspy.optimization.univariate.umda_categorical.UMDAcat(size_gen: int, max_iter: int, dead_iter: int, n_variables: int, possible_values: List | array, frequency: List | array, alpha: float = 0.5, elite_factor: float = 0.4, disp: bool = True, parallelize: bool = False, init_data: array | None = None)[source]

Bases: EDA

Categorical version of the UMDA algorithm. Each variable is approximated by an independent probability distribution in which it can take more than two possible values (if the variables are binary, the binary version of UMDA is preferable).
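
As a rough illustration, the following minimal sketch (illustrative helper names, not EDAspy's actual internals) shows how sampling can draw each variable from its own categorical distribution and how the frequencies can be re-estimated from the best individuals:

import numpy as np

def sample_generation(possible_values, frequency, size_gen):
    # one categorical draw per variable, normalising its frequencies
    columns = []
    for vals, freq in zip(possible_values, frequency):
        p = np.asarray(freq, dtype=float)
        columns.append(np.random.choice(vals, size=size_gen, p=p / p.sum()))
    return np.array(columns, dtype=object).T  # shape (size_gen, n_variables)

def update_model(elite, possible_values):
    # relative frequency of each value among the elite individuals
    return [[np.mean(elite[:, j] == v) for v in vals]
            for j, vals in enumerate(possible_values)]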

Example

This example uses a toy problem to show how to use the UMDAcat implementation.

import numpy as np

from EDAspy.optimization import UMDAcat

def categorical_cost_function(solution: np.ndarray):
    cost_dict = {
        'Color': {'Red': 0.1, 'Green': 0.5, 'Blue': 0.3},
        'Shape': {'Circle': 0.3, 'Square': 0.2, 'Triangle': 0.4},
        'Size': {'Small': 0.4, 'Medium': 0.2, 'Large': 0.1}
    }
    keys = list(cost_dict.keys())
    choices = {keys[i]: solution[i] for i in range(len(solution))}

    total_cost = 0.0
    for variable, choice in choices.items():
        total_cost += cost_dict[variable][choice]

    return total_cost

variables = ['Color', 'Shape', 'Size']
possible_values = np.array([
    ['Red', 'Green', 'Blue'],
    ['Circle', 'Square', 'Triangle'],
    ['Small', 'Medium', 'Large']], dtype=object
)

frequency = np.array([
    [.33, .33, .33],
    [.33, .33, .33],
    [.33, .33, .33]], dtype=object
)

n_variables = len(variables)

umda_cat = UMDAcat(size_gen=10, max_iter=100, dead_iter=10, n_variables=n_variables, alpha=0.5,
                   frequency=frequency, possible_values=possible_values)

umda_cat_result = umda_cat.minimize(categorical_cost_function, True)

property pm: ProbabilisticModel

Returns the probabilistic model used in the EDA implementation.

Returns:

probabilistic model.

Return type:

ProbabilisticModel

property init: GenInit

Returns the initializer used in the EDA implementation.

Returns:

initializer.

Return type:

GenInit

export_settings() dict

Export the configuration of the algorithm as a dictionary that can be loaded in another execution.

Returns:

configuration dictionary.

Return type:

dict

minimize(cost_function: callable, output_runtime: bool = True, ftol: float = 1e-08, *args, **kwargs) EdaResult

Executes the EDA optimization. By default, the optimizer is designed to minimize a cost function; if maximization is desired, just add a minus sign to your cost function.

Parameters:
  • cost_function – cost function to be optimized; it must accept an array as its argument.

  • output_runtime – true if information should be printed during runtime.

  • ftol – termination tolerance.

Returns:

EdaResult object with results and information.

Return type:

EdaResult

EDAspy.optimization.univariate.umda_continuous module

class EDAspy.optimization.univariate.umda_continuous.UMDAc(size_gen: int, max_iter: int, dead_iter: int, n_variables: int, lower_bound: array | List[float] | float | None = None, upper_bound: array | List[float] | float | None = None, alpha: float = 0.5, vector: array | None = None, lower_factor: float = 0.5, elite_factor: float = 0.4, disp: bool = True, parallelize: bool = False, init_data: array | None = None, w_noise: float = 0.5)[source]

Bases: EDA

Continuous univariate marginal Estimation of Distribution Algorithm. New individuals are sampled from a univariate normal probabilistic model. It can be used for hyper-parameter optimization or to optimize a function.

UMDA [1] is a specific type of Estimation of Distribution Algorithm (EDA) in which new individuals are sampled from univariate normal distributions that are updated in each iteration of the algorithm using the best individuals found in the previous one. In this implementation each individual is an array of real values, so new individuals are sampled from a univariate probabilistic model updated in each iteration. Optionally, a lower bound can be set on the standard deviation of the normal distributions to avoid premature convergence.
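
As a rough illustration, the following minimal sketch (illustrative helper names, not EDAspy's actual internals) shows the Gaussian update with a lower-bounded standard deviation and the sampling step:

import numpy as np

def update_and_sample(elite, size_gen, std_floor=0.5):
    # elite: (n_elite, n_variables) matrix of the best individuals
    mu = elite.mean(axis=0)
    sigma = np.maximum(elite.std(axis=0), std_floor)  # lower bound on the std
    return np.random.normal(mu, sigma, size=(size_gen, elite.shape[1]))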

This algorithm has been widely used in different applications, such as in [2], where it is applied to optimize the parameters of a quantum parametric circuit and shown to outperform other approaches in specific situations.

Example

This short example runs UMDAc for a benchmark function optimization problem in the continuous space.

from EDAspy.benchmarks import ContinuousBenchmarkingCEC14
from EDAspy.optimization import UMDAc

n_vars = 10
benchmarking = ContinuousBenchmarkingCEC14(n_vars)

umda = UMDAc(size_gen=100, max_iter=100, dead_iter=10, n_variables=n_vars, alpha=0.5,
             lower_bound=-100, upper_bound=100)

eda_result = umda.minimize(benchmarking.cec4, True)

References

[1]: Larrañaga, P., & Lozano, J. A. (Eds.). (2001). Estimation of distribution algorithms: A new tool for evolutionary computation (Vol. 2). Springer Science & Business Media.

[2]: Vicente P. Soloviev, Pedro Larrañaga and Concha Bielza (2022, July). Quantum Parametric Circuit Optimization with Estimation of Distribution Algorithms. In 2022 The Genetic and Evolutionary Computation Conference (GECCO). DOI: https://doi.org/10.1145/3520304.3533963

property init: GenInit

Returns the initializer used in the EDA implementation.

Returns:

initializer.

Return type:

GenInit

export_settings() dict

Export the configuration of the algorithm as a dictionary that can be loaded in another execution.

Returns:

configuration dictionary.

Return type:

dict

minimize(cost_function: callable, output_runtime: bool = True, ftol: float = 1e-08, *args, **kwargs) EdaResult

Executes the EDA optimization. By default, the optimizer is designed to minimize a cost function; if maximization is desired, just add a minus sign to your cost function.

Parameters:
  • cost_function – cost function to be optimized; it must accept an array as its argument.

  • output_runtime – true if information should be printed during runtime.

  • ftol – termination tolerance.

Returns:

EdaResult object with results and information.

Return type:

EdaResult

property pm: ProbabilisticModel

Returns the probabilistic model used in the EDA implementation.

Returns:

probabilistic model.

Return type:

ProbabilisticModel

EDAspy.optimization.univariate.keda module

class EDAspy.optimization.univariate.keda.UnivariateKEDA(size_gen: int, max_iter: int, dead_iter: int, n_variables: int, lower_bound: array | List[float] | float, upper_bound: array | List[float] | float, alpha: float = 0.5, elite_factor: float = 0.4, disp: bool = True, parallelize: bool = False, init_data: array | None = None, w_noise: float = 0.5)[source]

Bases: EDA

Univariate Kernel Density Estimation Algorithm (u_KEDA). New individuals are sampled from a KDE model. It can be used for hyper-parameter optimization or to optimize a function.

u_KEDA [1] is a specific type of Estimation of Distribution Algorithm (EDA) in which new individuals are sampled from univariate KDEs that are updated in each iteration of the algorithm using the best individuals found in the previous one. In this implementation each individual is an array of real values, so new individuals are sampled from a univariate probabilistic model updated in each iteration.
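
As a rough illustration, the following minimal sketch (using scipy.stats.gaussian_kde, with illustrative helper names, not EDAspy's actual internals) fits one KDE per variable to the best individuals and resamples new values from it:

import numpy as np
from scipy.stats import gaussian_kde

def sample_generation(elite, size_gen, lower_bound, upper_bound):
    # elite: (n_elite, n_variables) matrix of the best individuals
    new_gen = np.empty((size_gen, elite.shape[1]))
    for j in range(elite.shape[1]):
        kde = gaussian_kde(elite[:, j])            # univariate KDE per variable
        new_gen[:, j] = kde.resample(size_gen)[0]  # draw size_gen new values
    return np.clip(new_gen, lower_bound, upper_bound)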

Example

This short example runs u_KEDA for a benchmark function optimization problem in the continuous space.

from EDAspy.benchmarks import ContinuousBenchmarkingCEC14
from EDAspy.optimization import UnivariateKEDA

n_vars = 10
benchmarking = ContinuousBenchmarkingCEC14(n_vars)

keda = UnivariateKEDA(size_gen=100, max_iter=100, dead_iter=10, n_variables=n_vars, alpha=0.5,
                      lower_bound=-100, upper_bound=100)

eda_result = keda.minimize(benchmarking.cec4, True)

References

[1]: Larrañaga, P., & Lozano, J. A. (Eds.). (2001). Estimation of distribution algorithms: A new tool for evolutionary computation (Vol. 2). Springer Science & Business Media.

property init: GenInit

Returns the initializer used in the EDA implementation.

Returns:

initializer.

Return type:

GenInit

property pm: ProbabilisticModel

Returns the probabilistic model used in the EDA implementation.

Returns:

probabilistic model.

Return type:

ProbabilisticModel

export_settings() dict

Export the configuration of the algorithm as a dictionary that can be loaded in another execution.

Returns:

configuration dictionary.

Return type:

dict

minimize(cost_function: callable, output_runtime: bool = True, ftol: float = 1e-08, *args, **kwargs) EdaResult

Executes the EDA optimization. By default, the optimizer is designed to minimize a cost function; if maximization is desired, just add a minus sign to your cost function.

Parameters:
  • cost_function – cost function to be optimized; it must accept an array as its argument.

  • output_runtime – true if information should be printed during runtime.

  • ftol – termination tolerance.

Returns:

EdaResult object with results and information.

Return type:

EdaResult

EDAspy.optimization.univariate.pbil module

class EDAspy.optimization.univariate.pbil.PBIL(size_gen: int, max_iter: int, dead_iter: int, n_variables: int, lower_bound: array | List[float] | float | None = None, upper_bound: array | List[float] | float | None = None, alpha: float = 0.5, vector: array | None = None, lower_factor: float = 0.5, elite_factor: float = 0.4, disp: bool = True, parallelize: bool = False, init_data: array | None = None, w_noise: float = 0.5)[source]

Bases: EDA

Population-based incremental learning (PBIL) algorithm. New individuals are sampled from a univariate normal probabilistic model whose mean is slightly shifted considering not only the best individuals found, but also the worst one. It can be used for hyper-parameter optimization or to optimize a given cost function.

PBIL [1] is a specific type of Estimation of Distribution Algorithm (EDA) in which new individuals are sampled from univariate normal distributions that are updated in each iteration of the algorithm using the best individuals found in the previous one together with the worst one, in contrast to the UMDA approach. In this implementation each individual is an array of real values, so new individuals are sampled from a univariate probabilistic model updated in each iteration. Optionally, a lower bound can be set on the standard deviation of the normal distributions to avoid premature convergence. Note that here alpha regulates the importance of the previous generation relative to the combination of the two best individuals and the worst one, as follows:

\[\mu_{l+1} = (1 - \alpha) \mu_l + \alpha (x^{best, 1}_l + x^{best, 2}_l - x^{worst}_l)\]
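
As a rough illustration, the update above translates directly into code (a minimal sketch with illustrative names, not EDAspy's actual internals):

import numpy as np

def update_mean(mu, elite, worst, alpha=0.5):
    # mu: current means; elite: best individuals sorted by cost (best first);
    # worst: the worst individual of the generation
    return (1 - alpha) * mu + alpha * (elite[0] + elite[1] - worst)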

Example

This short example runs PBIL for a benchmark function optimization problem in the continuous space.

from EDAspy.benchmarks import ContinuousBenchmarkingCEC14
from EDAspy.optimization import PBIL

n_vars = 10
benchmarking = ContinuousBenchmarkingCEC14(n_vars)

pbil = PBIL(size_gen=100, max_iter=100, dead_iter=10, n_variables=n_vars, alpha=0.5,
            lower_bound=-100, upper_bound=100)

eda_result = pbil.minimize(benchmarking.cec4, True)

References

[1]: Sebag, M., & Ducoulombier, A. (1998, September). Extending population-based incremental learning to continuous search spaces. In International Conference on Parallel Problem Solving from Nature (pp. 418-427). Springer.

property init: GenInit

Returns the initializer used in the EDA implementation.

Returns:

initializer.

Return type:

GenInit

export_settings() dict

Export the configuration of the algorithm as a dictionary that can be loaded in another execution.

Returns:

configuration dictionary.

Return type:

dict

minimize(cost_function: callable, output_runtime: bool = True, ftol: float = 1e-08, *args, **kwargs) EdaResult

Executes the EDA optimization. By default, the optimizer is designed to minimize a cost function; if maximization is desired, just add a minus sign to your cost function.

Parameters:
  • cost_function – cost function to be optimized; it must accept an array as its argument.

  • output_runtime – true if information should be printed during runtime.

  • ftol – termination tolerance.

Returns:

EdaResult object with results and information.

Return type:

EdaResult

property pm: ProbabilisticModel

Returns the probabilistic model used in the EDA implementation.

Returns:

probabilistic model.

Return type:

ProbabilisticModel

Module contents