July 2021: GBEA Competition at GECCO 21, Lille
September 2020: GBEA Competition at PPSN 20, Leiden
August 2019: Tutorial on GBEA at CoG 19, London
July 2019: GBEA workshop at GECCO 19, Prague
August 2018: Tutorial on Ranking in Games at CIG 18, Maastricht
July 2018: GBEA workshop at GECCO 18, Kyoto
We are proposing a competition with multiple tracks that address several different research questions, all featuring continuous search spaces:
The competition is further available in single- and bi-objective versions, thus resulting in four different tracks.
The GBEA uses the COCO (COmparing Continuous Optimisers) framework for ease of integration. A winner will be determined independently for each of the above questions.
Details available here.
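COCO handles the bookkeeping that a benchmarking study needs: iterating over a suite of problems, counting evaluations, and logging progress for post-processing. As a rough illustration of the loop shape that the framework automates, here is a stdlib-only Python sketch; the `Problem` class and its attribute names are illustrative assumptions modelled loosely on the interface COCO's Python module exposes, not the real API.

```python
import random

# Stand-in for a COCO-style problem: a shifted sphere function with an
# evaluation counter and box constraints. Names are illustrative only.
class Problem:
    def __init__(self, dimension, shift):
        self.dimension = dimension
        self.lower_bounds = [-5.0] * dimension
        self.upper_bounds = [5.0] * dimension
        self.shift = shift
        self.evaluations = 0

    def __call__(self, x):
        self.evaluations += 1
        return sum((xi - s) ** 2 for xi, s in zip(x, self.shift))

def random_search(problem, budget, rng):
    """Evaluate `budget` uniform samples; return the best value found."""
    best = float("inf")
    for _ in range(budget):
        x = [rng.uniform(lo, hi)
             for lo, hi in zip(problem.lower_bounds, problem.upper_bounds)]
        best = min(best, problem(x))
    return best

# A "suite" is just an iterable of problems; the real framework also
# attaches an observer that logs every evaluation for post-processing.
rng = random.Random(0)
suite = [Problem(2, [1.0, -2.0]), Problem(3, [0.0, 0.5, 2.0])]
results = [random_search(p, budget=200, rng=rng) for p in suite]
```

A competition entry replaces `random_search` with the optimiser under study; the logged data then feeds into COCO's standard post-processing and plots.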
We are proposing a competition with multiple tracks that address several different research questions, all featuring continuous search spaces:
The competition is further available in single- and bi-objective versions, thus resulting in four different tracks.
The GBEA uses the COCO (COmparing Continuous Optimisers) framework for ease of integration. A winner will be determined independently for each of the above questions.
We propose to use benchmarks for the systematic analysis of AI-assisted game design approaches. In the tutorial, we demonstrate how the Game-Benchmark for Evolutionary Algorithms (GBEA) can be used to gain a detailed understanding of a given PCG problem. We further give examples of how this information can be used to improve the PCG algorithm.
More details can be found here.
Ranking mechanisms play an important part in games (tournaments for AI / human players, match-making, benchmarks). We present game-theoretic approaches to ranking mechanisms, such as social choice theory, and interpret their strengths and weaknesses in the context of existing game-related applications.
This relates to the GBEA as one important group of game-related problems is the hyperparameter optimisation of game AI. Fitness functions for these problems will be based on some form of ranking as discussed in the tutorial. The slides are available here.
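One classic mechanism from social choice theory that can serve as such a ranking-based fitness is the Borda count: each ballot ranks all candidates, and a candidate earns points according to its position on each ballot. The sketch below is a minimal stdlib-only illustration of that idea, not a method prescribed by the tutorial; the candidate names and ballots are made up for the example.

```python
from collections import Counter

def borda(rankings):
    """Aggregate preference orders with the Borda count: with n candidates,
    a candidate earns n-1 points for first place, n-2 for second, and so on.
    Returns candidates sorted best-first (ties broken alphabetically)."""
    scores = Counter()
    for order in rankings:
        n = len(order)
        for position, candidate in enumerate(order):
            scores[candidate] += n - 1 - position
    return sorted(scores, key=lambda c: (-scores[c], c))

# Three hypothetical ballots over players A, B, C:
ballots = [
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "C", "A"],
]
# Scores: A = 2+2+0 = 4, B = 1+0+2 = 3, C = 0+1+1 = 2
```

In a hyperparameter-optimisation setting, each ballot could come from one tournament or match-up, and a configuration's aggregated Borda score could act as its fitness value.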
The COCO (COmparing Continuous Optimisers) framework is a well-established benchmark for continuous optimisers in the EC community. However, it currently lacks support for real-world-like functions. We provide test functions inspired by game-related problems and integrate them into COCO, thus making all of COCO's existing post-processing and analysis features available. To better characterise the functions, we run established algorithms on them and compute ELA (Exploratory Landscape Analysis) features using flacco.
Below is the workshop programme along with further resources, such as slides, computation results and plots. The slides for the complete workshop can be found here.
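flacco is an R package, and the feature sets it computes are far richer than what fits here. Purely as an illustration of the kind of quantity a landscape feature captures, here is a stdlib-only Python sketch of one classic feature, fitness-distance correlation (FDC): the Pearson correlation between a sample's fitness and its distance to the best sample found. This is an independent illustration, not flacco's implementation.

```python
import math
import random

def fitness_distance_correlation(points, fitnesses):
    """Pearson correlation between each sample's fitness and its Euclidean
    distance to the best (lowest-fitness) sample in the set. Values near 1
    suggest a landscape where fitness improves steadily towards the best."""
    best = points[fitnesses.index(min(fitnesses))]
    dists = [math.dist(p, best) for p in points]
    n = len(fitnesses)
    mean_f, mean_d = sum(fitnesses) / n, sum(dists) / n
    cov = sum((f - mean_f) * (d - mean_d) for f, d in zip(fitnesses, dists))
    std_f = math.sqrt(sum((f - mean_f) ** 2 for f in fitnesses))
    std_d = math.sqrt(sum((d - mean_d) ** 2 for d in dists))
    return cov / (std_f * std_d)

# Sample the 2-D sphere function, a unimodal landscape where FDC is high.
rng = random.Random(1)
pts = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(100)]
fit = [x * x + y * y for x, y in pts]
fdc = fitness_distance_correlation(pts, fit)
```

On a deceptive or highly multimodal problem the same statistic drops towards zero or below, which is why features of this kind help explain why an algorithm succeeds on one game-derived function and fails on another.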