Game-Benchmark for Evolutionary Algorithms

Looking for the GECCO 2021 competition? See the dedicated competition page.

Game-Benchmark: But WHY?

On the one hand: multiple game-related algorithm competitions at GECCO and CIG, but no systematic analysis and comparison. On the other hand: benchmarking and analysis tools based on artificial test functions. Now: Game-Benchmark!

Games are a rich topic that motivates a lot of research. In several keynotes (e.g. IJCCI'15, EMO'17, CEC'18), blogs and papers, games have been suggested repeatedly as testbeds for AI algorithms. The most commonly cited reasons are key features of games such as controllability, safety and repeatability, as well as their ability to simulate properties of real-world problems such as measurement noise, uncertainty and the existence of multiple objectives. One key advantage over other simulations is that in the case of games, the application itself can be used as a test framework, so the problem does not need to be modelled separately. Additionally, games provide a context that facilitates data collection from human survey participants, because large communities are interested in playing specific games.

OK... and HOW?

A diverse suite of test functions for the COCO framework.

The COCO (COmparing Continuous Optimisers) framework is a well-established benchmark for continuous optimisers in the EC community. However, it currently lacks support for real-world-like functions. We provide test functions inspired by game-related problems and integrate them into COCO, thus making available all the existing post-processing and analysis features.
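To give a concrete idea of what this integration means in practice, below is a minimal sketch of running an optimiser on a COCO suite through the cocoex Python interface. It uses COCO's standard bbob suite and a placeholder random search; a GBEA suite would be selected via its own suite name (the exact name depends on the release, so check the documentation), and the result folder and evaluation budget here are arbitrary illustrative choices.

    # Minimal sketch: benchmarking an optimiser on a COCO suite via cocoex.
    # "bbob" is COCO's standard artificial suite; substitute the GBEA suite
    # name once it is installed (assumed here, see the documentation).
    import cocoex
    import numpy as np

    suite = cocoex.Suite("bbob", "", "")        # suite name, instance options, suite options
    observer = cocoex.Observer("bbob", "result_folder: demo")  # logs evaluations

    for problem in suite:
        problem.observe_with(observer)          # record every evaluation for post-processing
        # Placeholder optimiser: uniform random search inside the box constraints.
        for _ in range(100):                    # arbitrary small budget
            x = problem.lower_bounds + np.random.rand(problem.dimension) * (
                problem.upper_bounds - problem.lower_bounds)
            problem(x)                          # evaluate; the observer logs the result

Because every evaluation is observed, the logged results can be fed directly into COCO's existing post-processing (cocopp) to produce the analysis and comparison plots mentioned above.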

Cool! WHAT can I do?

GBEA is continuously growing, and you can participate in several ways! Whichever way you choose, head over to our main participation page and check the schedule!

If you have been working on game-related problems, please consider contacting us so that we can potentially include them in the benchmark. To make this as easy as possible, we provide three ways to make your problem available. In return, you will receive the solutions that selected state-of-the-art algorithms computed for your problem. What is more, you will gain deeper insight into your problem through our landscape analysis and discussion.

If your research field is evolutionary algorithms, you have probably used benchmarks to evaluate your work. We have found the number of real-world benchmarks to be lacking; what are your solutions? Do you have any other issues with existing benchmarks, or with the GBEA functions specifically? Join our discussion!

Finally, if you are around, why not join our events? We are looking forward to fruitful discussions among researchers from different fields. Hopefully, in the end we will have a benchmark that helps everyone.

WHO do I ask?

In case of questions, please contact us on our Slack channel.

Organisers

Vanessa Volz, Queen Mary University of London
Boris Naujoks, TH Köln - University of Applied Sciences
Tea Tušar, Jožef Stefan Institute
Pascal Kerschke, WWU Münster