GBEA Events

Events: Past and present

Here is a continuously updated list of upcoming workshops, tutorials, and other events.
Upcoming Events

July 2021: GBEA Competition at GECCO 21, Lille

Past Events

September 2020: GBEA Competition at PPSN 20, Leiden

August 2019: Tutorial on GBEA at CoG 19, London

July 2019: GBEA workshop at GECCO 19, Prague

August 2018: Tutorial on Ranking in Games at CIG 18, Maastricht

July 2018: GBEA workshop at GECCO 18, Kyoto

GECCO Competition (2021, Lille)

Description

We are proposing a competition with multiple tracks that address several different research questions, all featuring continuous search spaces:

  • Targets: The task is to find solutions of sufficient quality (as specified by the target) as quickly as possible (measured in number of function evaluations).

The competition is additionally available in single- and bi-objective versions, resulting in four different tracks.

The GBEA uses the COCO (COmparing Continuous Optimisers) framework for ease of integration. The winners for the above questions will be determined independently.
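
To make the evaluation protocol concrete, below is a minimal sketch of a budget-limited random search against a GBEA suite via COCO's cocoex Python module. The suite name rw-gan-mario is an assumption for illustration (the GBEA suites may additionally require their external game-evaluation engine to be running); any COCO-compatible optimiser can replace the random sampler.

```python
import numpy as np
import cocoex  # COCO experimentation module

# "rw-gan-mario" is an assumed GBEA suite name, used here for illustration.
suite = cocoex.Suite("rw-gan-mario", "", "")
budget_multiplier = 100  # evaluations per search-space dimension (illustrative)

for problem in suite:
    rng = np.random.default_rng(seed=1)
    lb, ub = problem.lower_bounds, problem.upper_bounds
    # Placeholder optimiser: uniform random search. Stop once the target
    # is hit or the evaluation budget is spent -- exactly the quantities
    # the Targets track scores on.
    while (not problem.final_target_hit
           and problem.evaluations < budget_multiplier * problem.dimension):
        problem(rng.uniform(lb, ub))  # one function evaluation
    print(problem.id, problem.evaluations, problem.final_target_hit)
```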

Details available here.

PPSN Competition (2020, Leiden)

Description

We are proposing a competition with multiple tracks that address several different research questions, all featuring continuous search spaces:

  1. Dimension of latent space: Which size of latent space is best?
    The task for competitors is to optimise the problems in various dimensions, and we will compare the discovered solutions across dimensions (see the sketch after this description).
  2. How can levels be concatenated?
    The task for competitors is to optimise the regular and concatenated versions of the same problems. We will analyse the patterns of discovered solutions in latent space.

The competition is additionally available in single- and bi-objective versions, resulting in four different tracks.

The GBEA uses the COCO (COmparing Continuous Optimisers) framework for ease of integration. The winners for the above questions will be determined independently.
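
As a hedged illustration of how question 1 could be approached with COCO's tooling: suite options let competitors restrict a suite to selected dimensions and then compare solution quality per latent-space size. The suite name rw-gan-mario and the sizes 10/20/30/40 are assumptions for illustration.

```python
import numpy as np
import cocoex

# Assumed suite name and latent-space sizes, for illustration only.
per_dim_best = {}
for problem in cocoex.Suite("rw-gan-mario", "", "dimensions: 10,20,30,40"):
    rng = np.random.default_rng(seed=1)
    lb, ub = problem.lower_bounds, problem.upper_bounds
    # Tiny random-search budget per problem, as a stand-in optimiser.
    best = min(problem(rng.uniform(lb, ub)) for _ in range(50))
    per_dim_best.setdefault(problem.dimension, []).append(best)

# Average best fitness per latent-space dimension (minimisation).
for dim, values in sorted(per_dim_best.items()):
    print(dim, sum(values) / len(values))
```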

CoG Tutorial (2019, London)

What benchmarks can teach us about AI-assisted game design.

We propose to use benchmarks for the systematic analysis of AI-assisted game design approaches. In the tutorial, we demonstrate how the Game-Benchmark for Evolutionary Algorithms (GBEA) can be used to gain a detailed understanding of a given procedural content generation (PCG) problem. We further give examples of how this information can be used to improve the PCG algorithm.

Outline
Using a running example, we discuss the following topics in our talk, in the order indicated below:
  • Why analyse PCG problems?
  • Given we have a problem: how do we compile a benchmark?
  • What analyses are possible, which tools does the GBEA provide, and how are they applied?
  • How can the analysis results be used to improve PCG algorithms?

GECCO workshop (2019, Prague)

Extended discussions on GBEA and benchmarking in general.
  • Game Benchmark for Evolutionary Algorithms: An Overview
    Vanessa Volz, Tea Tusar, Boris Naujoks, Pascal Kerschke
  • Paper: Game AI Hyperparameter Tuning in Rinascimento
    Ivan Bravi, Vanessa Volz, Simon Lucas
  • Panel: How can EC and games researchers learn from game benchmarking results?
    Dimo Brockhoff, Jonathan Fieldsend, Mike Preuss, Simon Lucas
  • General discussion on (real-world) benchmarking
    Vanessa Volz, Tea Tusar, Pascal Kerschke, Boris Naujoks

More details can be found here.

CIG Tutorial (2018, Maastricht)

Ranking Mechanisms in Games.

Ranking mechanisms play an important part in games (tournaments for AI / human players, match-making, benchmarks). We present game-theoretic approaches to ranking mechanisms, such as social choice theory, and interpret their strengths and weaknesses in the context of existing game-related applications.
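
As a toy illustration of one social-choice-style mechanism (not taken from the tutorial itself), the sketch below ranks hypothetical game-playing agents with Copeland's method: an agent scores one point for every opponent it beats in pairwise comparison.

```python
from collections import defaultdict

# Hypothetical round-robin win rates between four agents (made-up data).
# win_rate[(i, j)]: fraction of games agent i won against agent j.
win_rate = {
    ("A", "B"): 0.7, ("A", "C"): 0.4, ("A", "D"): 0.9,
    ("B", "C"): 0.6, ("B", "D"): 0.5, ("C", "D"): 0.8,
}
agents = ["A", "B", "C", "D"]

def beats(i, j):
    """True if agent i wins the pairwise comparison against agent j."""
    r = win_rate.get((i, j))
    return r > 0.5 if r is not None else win_rate[(j, i)] < 0.5

# Copeland score: one point per pairwise opponent beaten (ties score nothing).
copeland = defaultdict(int)
for i in agents:
    for j in agents:
        if i != j and beats(i, j):
            copeland[i] += 1

print(sorted(agents, key=lambda a: -copeland[a]), dict(copeland))
```

A score of this kind could serve directly as a ranking-based fitness value in the hyperparameter-tuning setting described below.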

This relates to the GBEA as one important group of game-related problems is the hyperparameter optimisation of game AI. Fitness functions for these problems will be based on some form of ranking as discussed in the tutorial. The slides are available here.

GECCO Workshop (2018, Kyoto)

First overview of, and discussions on, the GBEA benchmark.

The COCO (COmparing Continuous Optimisers) framework is a well-established benchmark for continuous optimisers in the EC community. However, it currently lacks support for real-world-like functions. We provide test functions inspired by game-related problems and integrate them into COCO, thus making all of its existing post-processing and analysis features available. In an attempt to characterise the functions better, we run established algorithms and compute exploratory landscape analysis (ELA) features using flacco.
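
For readers unfamiliar with ELA, the sketch below conveys the flavour of one such feature in plain numpy, without flacco (which is an R package): an R²-style linearity measure of a sampled landscape, similar in spirit to flacco's ela_meta linear-model features. The sphere function stands in for a game-based test function.

```python
import numpy as np

rng = np.random.default_rng(42)
dim, n = 5, 200
X = rng.uniform(-5, 5, size=(n, dim))   # random sample of the search space
y = np.sum(X**2, axis=1)                # stand-in objective: sphere function

# Fit a linear model f(x) ~ w.x + b and measure how well it explains the
# sampled landscape; a high R^2 indicates a near-linear landscape.
A = np.hstack([X, np.ones((n, 1))])     # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - (y - A @ coef).var() / y.var()
print(f"linear-fit R^2 (cf. flacco's ela_meta features): {r2:.3f}")
```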

Below is the workshop programme along with further resources, such as slides, computation results and plots. The slides for the complete workshop can be found here.

Workshop Programme
  1. Welcome and Schedule
  2. Background
  3. Benchmark: Online Documentation
  4. Discussion