2018-01-03

Solving a Tetris-puzzle with GAs

The puzzle in the picture comes from a Stack Overflow question (https://stackoverflow.com/q/47858717/3235496).

The general challenge posed is to tile a given region with a given set of polyominoes.

TL;DR

GAs aren't the most appropriate tool for this kind of puzzle: they tend to produce good but sub-optimal solutions, and this behaviour is acceptable only for some combinatorial optimization problems.

Anyway, the last example proposed almost always finds a solution (often in a short time), and GAs prove to be a general, viable approach, especially when prior knowledge isn't available.

Read more »

2017-12-02

Hello World! Genetic Algorithms



TL;DR

Writing a GA system from scratch is a great learning experience, but for real tasks don't reinvent the wheel.

It's not just a matter of subtleties: issues in a fresh implementation can easily go unnoticed because GAs are fault-tolerant by their very nature… at the expense of performance.


Read more »

2017-11-06

A genetic programming approach to Forex Expert Advisor generation

An example of how to use Genetic Programming to evolve a simple software agent, more specifically a MQL5 Expert Advisor (EA), operating on the Forex exchange market.

Read more »

2017-05-24

Scheduling concurrent jobs for several machines

A department needs to run certain programs daily. They have access to a fixed number of machines (homogeneous machines, so each job's run time is independent of the machine used) and have good approximations for the run time of each program.

These times are only approximate, and moreover we want a schedule that is robust (in the sense that a failure of some assumption, such as a program's run time, will not do too much damage). Typically one might augment the estimates by a modest percentage; we will simply assume that has already been done and take our set of job times as given.

Read more »

2016-10-01

Pathfinding via genetic algorithms

The problem we want to solve is to get from a starting point to a goal point avoiding obstacles.

NOTE. There are many efficient, ad-hoc algorithms (e.g. A*) that should be preferred for real pathfinding tasks, but this is a good example of the flexibility of GAs.

Read more »


(There's also a related question/answer on Stack Overflow)

2016-06-29

How to analyze the performance of an evolutionary algorithm experimentally?



The typical approach is to execute several runs of the EA and plot the average performance over time (the average performance of the best-of-run individual, NOT the population average).

A good rule of thumb is to perform a minimum of 30 runs (of course, 50-100 runs is better).

Reporting the average performance is better than reporting the best value achieved in a set of runs, but variance should also be taken into account.
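
As a minimal sketch of this data-collection step (`run_ea` is a hypothetical callable standing in for a real EA run, not part of any specific library):

```python
import numpy as np

def average_best_so_far(run_ea, n_runs=30, generations=200):
    """Average the best-so-far fitness curve over several independent runs.

    `run_ea` is a hypothetical callable: given a seed, it performs one
    complete EA run and returns the best-so-far fitness at each generation
    (a sequence of length `generations`).
    """
    curves = np.array([run_ea(seed=i, generations=generations)
                       for i in range(n_runs)])
    return curves.mean(axis=0)  # one averaged value per generation

# Example usage (with a stand-in for a real EA):
# avg_curve = average_best_so_far(my_ea_run, n_runs=30, generations=200)
```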

You can see two nice examples from Randy Olson's website:

The average fitness of both algorithms over several replicates. From this graph, we would conclude that our algorithm performs better than the current best algorithm on average.

The average fitness with a 95% confidence interval for each algorithm. This graph shows us that our algorithm does not actually perform better than the current best algorithm, and only appeared to perform better on average due to chance.


The basic breakdown of how to calculate a confidence interval for a population mean is as follows (a code sketch follows the list):

  1. Identify the sample mean $\bar{x}$. While $\bar{x}$ differs from $\mu$, the population mean, it is calculated the same way: $$\bar{x} = \frac{1}{n}\sum_{i=1}^{n}{x_i}$$
  2. Identify the (corrected) sample standard deviation $s$: $$s = \sqrt{\frac{\sum_{i=1}^{n}{(x_i - \bar{x})^2}} {n-1}}$$
    $s$ is an estimation of the population standard deviation $\sigma$.
  3. Calculate the critical value, $t^*$, of the Student-t distribution. This value is dependent on the confidence level, $C$, and the number of observations, $n$.


    The critical value is found from the t-distribution table (most statistical textbooks list it). In this table $t^*$ is written as $$t^*(\alpha, r)$$ where $r = n-1$ is the degrees of freedom (found by subtracting one from the number of observations) and $\alpha = {1-C \over 2}$ is the significance level.
    A better way to obtain a fully precise critical $t^*$ value is to use the statistical functions implemented in spreadsheets (e.g. the T.INV.2T function), scientific computing environments (e.g. SciPy's stats.t.ppf) or language libraries (e.g. C++ with boost::math::students_t).
  4. Plug the found values into the appropriate equation:
    $$\left({\bar x} - t^{*}{\frac {s} {\sqrt n}}, {\bar x} + t^{*}{\frac {s} {\sqrt n}}\right)$$
  5. The final step is to interpret the answer. Since the result is an interval with an upper and a lower bound, it's appropriate to state that, based on the given data, the true mean of the population lies between the lower bound and the upper bound with the chosen confidence level.
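
As an illustration of the steps above, here is a small Python sketch (using NumPy and SciPy's stats.t.ppf, mentioned in step 3; the input values are assumed to be the best-of-run fitnesses collected earlier):

```python
import numpy as np
from scipy import stats

def t_confidence_interval(samples, confidence=0.95):
    """Confidence interval for the population mean (steps 1-5 above)."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    x_bar = samples.mean()                       # 1. sample mean
    s = samples.std(ddof=1)                      # 2. corrected sample std dev
    alpha = (1.0 - confidence) / 2.0             # significance level
    t_star = stats.t.ppf(1.0 - alpha, df=n - 1)  # 3. critical value t*
    margin = t_star * s / np.sqrt(n)             # 4. half-width of the interval
    return x_bar - margin, x_bar + margin

# Example with hypothetical best-of-run fitness values from 30 runs:
# lower, upper = t_confidence_interval(best_of_run_values)
```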


The more the confidence intervals of two algorithms overlap, the more likely the algorithms are to perform the same (or we haven't sampled enough to discriminate between the two). If the 95% confidence intervals don't overlap, then the algorithm with the higher average performance performs significantly better.
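
A rough sketch of that overlap check, reusing the t_confidence_interval helper sketched above (the variable names are illustrative):

```python
def intervals_overlap(ci_a, ci_b):
    """True if two (lower, upper) confidence intervals overlap."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# ci_ours = t_confidence_interval(our_best_of_run_values)
# ci_best = t_confidence_interval(current_best_of_run_values)
# if not intervals_overlap(ci_ours, ci_best):
#     print("The difference in average performance looks significant.")
```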

In EAs, the source distribution is essentially never normal, and what has been said so far formally applies only to a normal distribution!

Even so, it still tells us a lot. The following table summarizes the performance of t-intervals in four situations:

|                            | Normal curve | Not normal curve |
|----------------------------|--------------|------------------|
| Small sample size (n<30)   | Good         | Poor             |
| Larger sample size (n≥30)  | Good         | Fair             |

For more accurate answers, non-parametric statistics are the way to go (see An Introduction to Statistics for EC Experimental Analysis by Mark Wineberg and Steffen Christensen for further details).
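
As one illustrative option (the paper covers several rank-based techniques; the specific choice of test here is an assumption, not a prescription from the paper), two sets of best-of-run results could be compared with SciPy's Mann-Whitney U test:

```python
from scipy import stats

def compare_nonparametric(samples_a, samples_b, alpha=0.05):
    """Compare two sets of best-of-run results without assuming normality."""
    _, p_value = stats.mannwhitneyu(samples_a, samples_b,
                                    alternative='two-sided')
    return p_value, p_value < alpha

# p, significant = compare_nonparametric(our_results, baseline_results)
```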

(First revision posted on Computer Science - Stack Exchange)