Provisionally Accepted IEEE CEC 2019 Competitions





CEC-C01 Multimodal Multiobjective Optimization


Organized by Jing Liang, Boyang Qu and Dunwei Gong

Scope and Topics

In multiobjective optimization problems, there may exist two or more distinct Pareto-optimal sets (PSs) corresponding to the same Pareto front (PF). Such problems are defined as multimodal multiobjective optimization problems (MMOPs). Arguably, finding one of the multiple PSs may be sufficient to obtain an acceptable solution for some problems. However, failing to identify more than one PS may prevent the decision maker from considering solution options that could bring about improved performance.
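
To make this concrete, here is a minimal toy MMOP of our own construction (illustrative only, not one of the competition's benchmark problems); it shows how several disjoint PSs can map onto a single PF:

```latex
% A toy MMOP: one decision variable, x in [0, 2].
\begin{align*}
  \min_{x \in [0,2]} \; F(x) &= \bigl(f_1(x),\, f_2(x)\bigr),\\
  f_1(x) &= \sin^2(\pi x), \qquad f_2(x) = \cos^2(\pi x).
\end{align*}
% Since f_1 + f_2 = 1, every x is Pareto optimal and the PF is the
% segment f_2 = 1 - f_1 with f_1 in [0, 1]. Each of the four intervals
% [0, 0.5], [0.5, 1], [1, 1.5], [1.5, 2] is a distinct PS that maps onto
% the same PF, so an algorithm should ideally retain all four.
```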

The aim of this special session is to promote research on multimodal multiobjective optimization (MMO) and hence to motivate researchers to formulate real-world practical problems as MMOPs. The study of MMO is still in its emerging stages: although many real-world applications are likely to be amenable to treatment as MMOPs, such formulations have so far been largely overlooked.

This special session is devoted to novel approaches, algorithms and techniques for solving MMOPs. The main topics of the special session are:

  • Evolutionary algorithms for multimodal multiobjective optimization
  • Hybrid algorithms for multimodal multiobjective optimization
  • Adaptable algorithms for multimodal multiobjective optimization
  • Surrogate techniques for multimodal multiobjective optimization
  • Machine learning methods helping to solve multimodal multiobjective optimization problems
  • Memetic computing for multimodal multiobjective optimization
  • Niching techniques for multimodal multiobjective optimization
  • Parallel computing for multimodal multiobjective optimization
  • Design methods for multimodal multiobjective optimization test problems
  • Decision making in multimodal multiobjective optimization
  • Related theory analysis
  • Applications

Submission instructions

Papers should be submitted following the instructions at the IEEE CEC 2019 web site. Please select the main research topic as the Special Session on “multimodal multiobjective optimization”. Accepted papers will be published in the conference proceedings.

Submission deadline

7th January 2019, 23:59 (GMT)



CEC-C02 Evolutionary Multi-task Optimization


Organized by Liang Feng, Kai Qin, Abhishek Gupta, Yuan Yuan, Yew-Soon Ong, Xu Chi

Supported by the IEEE CIS Task Force on "Transfer Learning & Transfer Optimization" of the Intelligent Systems Applications Technical Committee

Scope and Topics

Humans possess a remarkable ability to manage and execute multiple tasks simultaneously, e.g., talking while walking. This multitasking capability has inspired computational methodologies that tackle multiple tasks at the same time, leveraging commonalities and differences across tasks to solve the component tasks more effectively and efficiently than when dealing with them separately. As a well-known example, multi-task learning is a very active subfield of machine learning in which multiple learning tasks are performed together using a shared model representation, so that relevant information contained in related tasks can be exploited to improve the learning efficiency and generalization performance of task-specific models.

Multi-task optimization (MTO) is a newly emerging research area in the field of optimization, which investigates how to effectively and efficiently tackle multiple optimization problems at the same time. In a multitasking scenario, solving one optimization problem may assist in solving others (i.e., synergetic problem-solving) if the problems bear commonality and/or complementarity in terms of optimal solutions and/or fitness landscapes. As a simple example, if several problems have the same globally optimal solution but distinct fitness landscapes, obtaining the global optimum of any one of them immediately solves the others. Recently, an evolutionary MTO paradigm named evolutionary multitasking was proposed to explore the potential of evolutionary algorithms (EAs) equipped with a unified solution representation space for MTO. As population-based optimizers, EAs feature the Darwinian “survival-of-the-fittest” principle and nature-inspired reproduction operations, which inherently promote implicit knowledge transfer across tasks during problem-solving. The superiority of this evolutionary multitasking framework over solving each task independently has been demonstrated on synthetic and real-world MTO problems using the multi-factorial EA (MFEA) developed under this framework. A sketch of the core MFEA mechanics appears below.
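
For readers unfamiliar with the paradigm, the following Python sketch illustrates the core MFEA mechanics: a unified search space, skill factors, assortative mating controlled by a random mating probability, and vertical transmission of skill factors. The task functions and all parameter values are our own toy choices, not the official competition setup:

```python
import random

# Two toy tasks defined on a unified search space [0, 1]^D.
D = 10
def task1(x):  # sphere centred at 0.5 in the unified space
    return sum((xi - 0.5) ** 2 for xi in x)
def task2(x):  # sphere centred at 0.3 in the unified space
    return sum((xi - 0.3) ** 2 for xi in x)
TASKS = [task1, task2]
RMP, POP, GENS = 0.3, 40, 100   # random mating probability, sizes

def make_ind():
    x = [random.random() for _ in range(D)]
    sf = random.randrange(len(TASKS))            # skill factor
    return {"x": x, "sf": sf, "fit": TASKS[sf](x)}

def uniform_crossover(a, b):
    return [ai if random.random() < 0.5 else bi
            for ai, bi in zip(a["x"], b["x"])]

pop = [make_ind() for _ in range(POP)]
for _ in range(GENS):
    kids = []
    while len(kids) < POP:
        a, b = random.sample(pop, 2)
        # Assortative mating: cross-task recombination happens only with
        # probability RMP; this is the implicit knowledge-transfer channel.
        if a["sf"] == b["sf"] or random.random() < RMP:
            x = uniform_crossover(a, b)
            sf = random.choice([a["sf"], b["sf"]])  # vertical transmission
        else:
            x = [min(1.0, max(0.0, xi + random.gauss(0.0, 0.02)))
                 for xi in a["x"]]                   # mutate instead
            sf = a["sf"]
        kids.append({"x": x, "sf": sf, "fit": TASKS[sf](x)})
    # Scalar fitness via per-task ranks, then elitist survival.
    merged = pop + kids
    for sf in range(len(TASKS)):
        group = sorted((i for i in merged if i["sf"] == sf),
                       key=lambda i: i["fit"])
        for rank, ind in enumerate(group):
            ind["rank"] = rank
    pop = sorted(merged, key=lambda i: i["rank"])[:POP]

for sf in range(len(TASKS)):
    best = min((i["fit"] for i in pop if i["sf"] == sf), default=None)
    print(f"task {sf}: best fitness {best}")
```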

Evolutionary multitasking opens up new horizons for researchers in the field of evolutionary computation. It provides a promising means to deal with the ever-increasing number, variety and complexity of optimization tasks. More importantly, rapid advances in cloud computing could eventually turn optimization into an on-demand service hosted on the cloud. In such a case, a variety of optimization tasks would be simultaneously executed by the service engine where evolutionary multitasking may harness the underlying synergy between multiple tasks to provide service consumers with faster and better solutions.

Given the good response to this competition at CEC’17 and WCCI’18 (17 and 13 entries, respectively), we are continuing it at CEC’19, aiming to promote research advances in both algorithmic and theoretical aspects of evolutionary MTO.

Please refer to the complete document for more details.

Submission instructions

Interested participants are strongly encouraged to report their approaches and results in a paper and submit it to the "CEC-01 Special Session on Memetic Computing" before the CEC 2019 paper submission deadline. If you would like to participate in the competition, please inform us of your interest via email (mtocompetition@gmail.com) so that we can update you about any bug fixes and/or deadline extensions.

Submission deadline

1st May 2019, 23:59 (GMT)



CEC-C03 Online Data-Driven Multi-Objective Optimization Competition


Organized by Handing Wang, Cheng He, Ye Tian, and Yaochu Jin

Supported by IEEE CIS TF on “Intelligence Systems for Health” in the Intelligent Systems Application Technical Committee and IEEE CIS TF on “Data-Driven Evolutionary Optimization of Expensive Problems” in the Evolutionary Computation Technical Committee

Scope and Topics

Evolutionary multi-objective optimization (EMO) has been flourishing for two decades in academia. However, industrial applications of EMO to real-world optimization problems remain infrequent, because EMO methods typically rest on the strong assumption that objective function evaluations are easily obtained. In fact, such objective functions may not exist in closed form; instead, computationally expensive numerical simulations or costly physical experiments must be performed for evaluation. Problems driven by data collected from such simulations or experiments are formulated as data-driven optimization problems, which pose challenges to conventional EMO algorithms. Firstly, obtaining even the minimum amount of data that conventional EMO algorithms need to converge requires a high computational or resource cost. Secondly, although surrogate models that approximate the objective functions can replace real function evaluations, search accuracy cannot be guaranteed because of the approximation errors of the surrogates. Thirdly, since only a small amount of online data may be sampled during the optimization process, the management of online data significantly affects algorithm performance. Research on data-driven evolutionary optimization has not received sufficient attention, although techniques for solving such problems are in high demand. One main reason is the lack of benchmark problems that closely reflect real-world challenges, which leaves a large gap between academia and industry.

Submission instructions

In this competition, we have carefully selected six benchmark multi-objective optimization problems from real-world applications, including car cab design, optimization of vehicle frontal structures, filter design, optimization of power systems, and optimization of neural networks. The objective functions of these problems cannot be calculated analytically, but they can be evaluated by calling an executable program that provides true black-box evaluations for both offline and online data sampling. A set of initial data is generated offline using Latin hypercube sampling, and a predefined fixed number of online data samples is used as the stopping criterion. This competition, organized as an event of the Task Force on “Intelligence Systems for Health” in the Intelligent Systems Application Technical Committee and the Task Force on “Data-Driven Evolutionary Optimization of Expensive Problems” in the Evolutionary Computation Technical Committee, aims to promote research on data-driven evolutionary multi-objective optimization through a set of benchmark problems extracted from various real-world optimization applications. All benchmark functions are implemented in MATLAB. The MATLAB code has also been embedded in a recently developed software platform, PlatEMO, an open-source MATLAB-based platform for evolutionary multi- and many-objective optimization, which currently includes more than 50 representative algorithms and over 100 benchmark functions, along with a variety of widely used performance indicators. A sketch of the kind of optimization loop this setup implies follows.
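
As an illustration of the intended workflow, here is a minimal online data-driven optimization loop in Python, shown for a single objective for brevity: offline data from Latin hypercube sampling, a surrogate model, and a small online evaluation budget. The black-box function stands in for the competition's executable evaluator, and all settings are our own assumptions:

```python
import numpy as np
from scipy.stats import qmc                       # Latin hypercube sampling
from sklearn.gaussian_process import GaussianProcessRegressor

def black_box(x):                                 # stand-in for the real
    return np.sum((x - 0.3) ** 2, axis=-1)        # executable evaluator

dim, n_offline, online_budget = 5, 50, 20
sampler = qmc.LatinHypercube(d=dim, seed=1)
X = sampler.random(n_offline)                     # offline data (LHS)
y = black_box(X)

for _ in range(online_budget):                    # fixed online budget
    model = GaussianProcessRegressor().fit(X, y)  # surrogate of the objective
    cand = np.random.rand(2000, dim)              # cheap candidate screening
    mu, sigma = model.predict(cand, return_std=True)
    pick = cand[np.argmin(mu - sigma)]            # optimistic infill point
    X = np.vstack([X, pick])                      # spend one true evaluation
    y = np.append(y, black_box(pick))

print("best objective found:", y.min())
```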

Submission deadline

15th April 2019, 23:59 (GMT)



CEC-C04 Competition on Smart Grid and Sustainable Energy Systems


Organized by Zhile Yang, Kunjie Yu and Zhou Wu

Scope and Topics

Shaping a low-carbon energy future is a crucial and urgent task under the Paris Agreement. Numerous optimisation problems have been formulated and solved to reduce fossil fuel costs and energy waste on both the power-system and energy-application sides. However, some key problems exhibit strongly non-convex, non-smooth or mixed-integer characteristics, posing significant challenges for system operators and energy users. This competition aims to encourage researchers to present state-of-the-art optimisation tools for three complicated optimisation tasks: unit commitment, economic load dispatch, and parameter identification for photovoltaic models and PEM fuel cells.

The unit commitment (UC) problem aims to minimize economic cost by optimally determining the online/offline status and power dispatch of each unit while maintaining various system constraints, which yields a large-scale mixed-integer problem. Economic load dispatch (ELD) is a power system operation task that minimises fossil fuel cost by determining the day-ahead and/or hourly power generation of each generator; a standard simplified formulation is given below. Fuel cells are among the most important future energy storage technologies, particularly for applications in vehicles and robotics. The proton exchange membrane (PEM) is the key component of such fuel cells, yet it is difficult to model accurately owing to its nonlinear, multivariate and strongly coupled characteristics. Evolutionary computation is largely insensitive to the complexity of the problem formulation and therefore promises powerful optimisation tools for intelligently and efficiently solving such problems in smart grid and energy system scheduling, helping to reduce carbon emissions.
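
As a concrete reference point for the second task, a standard textbook formulation of economic load dispatch is sketched here; the competition's exact variants (e.g., valve-point effects, prohibited operating zones) are defined in its own material:

```latex
% Classical ELD: N generators with quadratic fuel-cost curves.
\begin{align*}
  \min_{P_1,\dots,P_N} \quad & \sum_{i=1}^{N} \bigl( a_i P_i^2 + b_i P_i + c_i \bigr)\\
  \text{s.t.} \quad & \sum_{i=1}^{N} P_i = P_D + P_{\mathrm{loss}},\\
  & P_i^{\min} \le P_i \le P_i^{\max}, \quad i = 1,\dots,N,
\end{align*}
% where P_i is the output of generator i, P_D the system demand, and
% a_i, b_i, c_i the fuel-cost coefficients.
```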

A brief list of potential submission topics is shown below:

  • Unit commitment
  • Economic load dispatch
  • Parameters identification for photovoltaic models and PEM fuel cells

Submission instructions

This competition intends to reflect state-of-the-art advances in evolutionary optimisation for emerging problems in complex modern power and energy systems. We choose the above three problems as the optimisation targets so that different algorithms can be compared on the same platform and the best results identified. Simulation experiments and data should be implemented in MATLAB or another software platform; entries will then be ranked according to the competition evaluation criteria. Interested participants are strongly encouraged to report their approaches and results in a paper and submit it to our special session, "CEC-17 Special Session on Evolutionary Computations on Smart Grid and Sustainable Energy Systems", in the conference submission system, and also to send their codes to the competition organizer at zl.yang@siat.ac.cn for verification. All papers should be submitted before the conference paper submission deadline.

Submission deadline

7th Jan 2019, 23:59 (GMT)



CEC-C05 Evolutionary Computation in Uncertain Environments: A Smart Grid Application


Organized by Fernando Lezama, Joao Soares, Zita Vale, Jose Rueda, and Markus Wagner

Scope and Topics

Following the success of the previous edition at WCCI 2018, we are relaunching this competition at major conferences in the field of computational intelligence. This CEC 2019 competition proposes the optimization of a centralized day-ahead energy resource management problem in smart grids under uncertain environments. This year we have increased the difficulty by providing a more challenging case study, namely one with a higher degree of uncertainty.

Competition goals:

The CEC 2019 competition on “Evolutionary Computation in Uncertain Environments: A Smart Grid Application” has the purpose of bringing together and testing the most advanced Computational Intelligence (CI) techniques applied to an energy domain problem, namely the energy resource management problem under uncertain environments. The competition provides a coherent framework in which participants and practitioners of CI can test their algorithms on a real-world optimization problem in the energy domain with uncertainty considered, which makes the problem more challenging and worth exploring.

Submission instructions

Rules:

  • Participants will propose and implement a metaheuristic algorithm (e.g., evolutionary algorithms, swarm intelligence, estimation-of-distribution algorithms, etc.) to solve the energy resource management problem under uncertainty.
  • The organizers provide a framework, implemented in MATLAB® 2014b (64-bit), in which participants can easily test their algorithms (we also provide a differential evolution implementation as an example). The guidelines include the information necessary to understand the problem, how solutions are represented, and how the fitness function is evaluated. These elements are common to all participants.
  • Since the proposed algorithms may use different population sizes and run for different numbers of iterations, a maximum of 50,000 function evaluations is allowed per trial for all participants. The convergence properties of the algorithms are not a qualification criterion in this competition.
  • 20 independent trials should be performed in the framework by each participant.
How to submit an entry and how entries will be evaluated:
  • The winner will be the participant with the minimum ranking index, calculated as the average over the 20 trials of the expected fitness value (over the considered uncertainty scenarios) plus the standard deviation; a sketch of this computation follows the list.
  • Each participant is kindly requested to put the text files corresponding to final results (see guideline document), as well as the implementation files (codes), obtained by using a specific optimizer, into a zipped folder named CEC2019_SG_AlgorithmName_ParticipantName.zip (e.g. CEC2019_SG_DE_Lezama.zip).
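
Under our reading of these rules (the guideline document is authoritative), the ranking index can be computed as in the following Python sketch, where the result matrix is a fake placeholder:

```python
import numpy as np

# fitness[t, s]: fitness of trial t's final solution in uncertainty
# scenario s (placeholder values; real ones come from the framework).
n_trials, n_scenarios = 20, 100
fitness = np.random.rand(n_trials, n_scenarios)

# Per trial: expected fitness over scenarios plus its standard deviation;
# the ranking index is the average of this quantity over the 20 trials.
per_trial = fitness.mean(axis=1) + fitness.std(axis=1)
ranking_index = per_trial.mean()
print(f"ranking index: {ranking_index:.4f}")
```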

Submission deadline


Jan 7th 2019, 23:59 (GMT) (For those submitting papers to the special session)
April 30th 2019, 23:59 (GMT) (Submission without paper)



CEC-C06 Competition on 100-Digit Challenge on Single Objective Numerical Optimization


Organized by P N Suganthan, K. V. Price and Mostafa Z Ali

Scope and Topics

Research on single objective optimization algorithms often forms the foundation for more complex scenarios, such as niching algorithms and both multi-objective and constrained optimization algorithms. Traditionally, single objective benchmark problems are also the first test for new evolutionary and swarm algorithms. Additionally, single objective benchmark problems can be transformed into dynamic, niching composition, computationally expensive and many other classes of problems. It is with the goal of better understanding the behavior of evolutionary algorithms as single objective optimizers that we are introducing the 100-Digit Challenge.

The SIAM 100-Digit Challenge was developed in 2002 by Nick Trefethen in conjunction with the Society for Industrial and Applied Mathematics (SIAM) as a test for high-accuracy computing. The challenge was to solve 10 hard problems to 10 digits of accuracy; one point was awarded for each correct digit, making the maximum score 100, hence the name. Contestants were allowed to apply any method to any problem and take as long as needed to solve it. Of the 94 teams that entered, 20 scored 100 points and 5 others scored 99.

In a similar vein, we propose our 100-Digit Challenge. In contrast to the SIAM version, it asks contestants to solve all ten problems with one algorithm, although limited control parameter “tuning” for each function is permitted to restore some of the original contest’s flexibility. Another difference is that the score for a given function is the average number of correct digits in the best 25 out of 50 trials; a sketch of this scoring is given below.
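
Our reading of the scoring rule is sketched below in Python; the precise definition of a "correct digit" is given in the competition's technical report, so treat this as an approximation:

```python
import math

def correct_digits(f_found, f_true, max_digits=10):
    """Approximate count of correct digits from the absolute error."""
    err = abs(f_found - f_true)
    if err == 0.0:
        return max_digits
    return max(0, min(max_digits, int(-math.log10(err))))

def function_score(best_25_values, f_true):
    # Score for one function: average number of correct digits over the
    # best 25 of 50 trials (assumed already selected here).
    return sum(correct_digits(v, f_true) for v in best_25_values) / 25.0

print(correct_digits(1.0000004999, 1.0))  # -> 6
```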

Submission instructions

Participants are asked to submit their papers to CEC 2019 according to the paper submission instructions, and to email their final results in the format requested in the associated Technical Report. The three top-performing algorithms will be made available online on the competition web pages.

Submission deadline

Jan 7th 2019, 23:59 (GMT)



CEC-C07 FML-based Machine Learning Competition for Human and Smart Machine Co-Learning on Game of Go


Organized by Chang-Shing Lee, Yusuke Nojima, Naoyuki Kubota, Giovanni Acampora, and Marek Reformat

Supported by the Task Force on Competitions of the IEEE CIS Fuzzy Systems Technical Committee

Scope and Topics

With the success of AlphaGo, there has been much interest among students and professionals in applying machine learning to gaming, and in particular to the game of Go. Several conferences have held competitions pitting humans against computer programs, or computer programs against each other. While computer programs already play better than humans (even high-level professionals), machine learning still offers interesting prospects, from the fundamental point of view of (1) pushing the limits of game playing even further (by having programs play against each other) and (2) better understanding machine intelligence and comparing it to human intelligence, and from the practical point of view of enhancing the human playing experience by coaching professionals to play better or by training beginners. The latter raises interesting questions about the explainability of machine game play. This competition will evaluate the potential of learning machines to teach humans.

Submission instructions

The OpenGo Darkforest (OGD) Cloud Platform for Game of Go includes the following parts:

  • Testing Platform: OGD Platform (NCHC, Taiwan/NUTN, Taiwan/OPU, Japan/TMU, Japan)
  • Open Source: ELF OpenGo Engine, FAIR, USA
  • FML Tool: Giovanni Acampora Lab, Italy / KWS Center, Taiwan
  • Testing Data: KWS Center, Taiwan / OASE Lab., Taiwan / Nojima Lab., Japan/Saga Lab., Japan
  • Verification and Validation Go Players @ IEEE CEC 2019: invited top professional Go players or student Go players from Japan / Taiwan / France / Italy / Canada

Metrics, Judging, and Rules

There are two possibilities:

  • Student participants build classifiers using a data set of 60 games and submit them. We then examine the generalization ability of the submitted classifiers on an unseen data set of further games. The winner of this track is the person who develops the classifier that generalizes best to unseen games.
  • Student participants propose a new framework to predict the game result. We, the organizers, do not prescribe how participants use the data we provide. The winner of this track is selected by the competition committee based on the idea behind the proposed framework.

Data

Regarding the provided data, we divide the 60 competition games played between Master and top professional Go players in Dec. 2016 and Jan. 2017. Each game's data were sampled as four sub-games; that is, there are four time windows for each game. We provide partial data to the participants, as described below:

  • We provide only the first time window of the game. That is, there are 12 inputs (M1, …, M11, and AM) for each pattern (game). Participants build classifiers to predict the final result from the game situation at the beginning of the game.
  • We provide the first and second time windows of the game. That is, there are 12 × 2 inputs (M1, …, M11, and AM per window) for each pattern (game). Participants build classifiers to predict the final result from the first half of the game situations.
  • We provide the first three time windows of the game. That is, there are 12 × 3 inputs for each pattern (game). Participants build classifiers to predict the final result from most of the game situations.
  • We provide all four time windows of the game. That is, there are 12 × 4 inputs for each pattern (game). Participants build classifiers to predict the final result from the four time windows of the game situations.
The desired output classes are as follows:
  • MODEL 1: two classes, including B+ and W+
  • MODEL 2: three classes, including B+, U, and W+
  • MODEL 3: five classes, including B++, B+, U, W+, and W++
  • MODEL 4: seven classes, including B++, B+, B-, U, W-, W+, and W++
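
To make the setting concrete, here is an illustrative Python sketch for MODEL 1 with the first time window only; the feature shape follows the description above, but the synthetic data and the choice of classifier are our own assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

n_games, n_windows = 60, 1
# 12 features (M1..M11 and AM) per time window, one row per game.
X = np.random.rand(n_games, 12 * n_windows)       # placeholder features
y = np.random.choice(["B+", "W+"], size=n_games)  # MODEL 1 labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# Cross-validation as a rough proxy for the unseen-game evaluation.
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```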

Submission deadline

Jan 30th 2019, 23:59 (GMT)



CEC-C08 General Video Game AI Single-Player Learning Competition


Organized by Hao Tong, Ruben Rodriguez Torrado, Philip Bontrager

Scope and Topics

The General Video Game AI (GVG-AI) Competition explores the problem of creating agents for general video game playing. How would you create a single agent that is able to play any game it is given? Could you program an agent that is able to play a wide variety of games, without knowing which games are to be played and without a forward model?

The GVGAI learning framework has been interfaced with OpenAI Gym and provides a fantastic, user-friendly environment for testing your reinforcement learning agents. The framework also allows users to create their own games easily to test their agents. A minimal interaction loop is sketched below.
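
For orientation, a minimal Gym interaction loop looks like the following Python sketch; the environment id is a hypothetical placeholder, so consult the competition website for the actual ids and installation steps:

```python
import gym
import gym_gvgai  # registers the GVGAI environments (see the website)

env = gym.make("gvgai-testgame1-lvl0-v0")  # placeholder environment id
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()     # replace with your RL policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```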

More about this competition can be found on the competition website (http://www.aingames.cn).

Submission instructions

The participants are invited to submit their agent via the competition website (http://www.aingames.cn). Participants are also encouraged to submit papers about this competition to the Special Session on Games (CEC-04) via the CEC2019 website.

Submission deadline


Paper submission: 7th Jan 2019, 23:59 (GMT)
Agent submission: 30th April 2019, 23:59 (GMT)

Remark: Paper submission is not mandatory; you are welcome to participate in the competition without submitting any paper.



CEC-C09 Strategy Card Game AI Competition


Organized by Jakub Kowalski and Radoslaw Miernik

Scope and Topics

The game is a small implementation of a strategy card game designed for AI research. Its advantage over engines for real card games is that it is much simpler for agents to handle, allowing more sophisticated algorithms to be tested and theoretical ideas to be implemented quickly. Its goal is to encourage advanced research free of the drawbacks of working with a full-fledged game. This means, among other things, embedding deck-building into the game itself (limiting the use of premade decks) and allowing efficient search beyond one-turn depth.

All card effects are deterministic; nondeterminism is introduced only by the ordering of cards and the unknown opponent's deck. The game board consists of two lanes (similar to TES: Legends), so it favors deeper strategic thinking. The game is also based on the fair arena mode: before every game, both players secretly create their decks from symmetrical yet limited choices. Because of that, deck-building is dynamic and cannot simply be reduced to using human-created top-meta decks.

Submission instructions

The participants are invited to submit their code via email (jko@cs.uni.wroc.pl).

Submission deadline

19th May 2019, 23:59 (GMT) (preliminary deadline)



CEC-C10 Nonlinear Equation Systems Competition


Organized by Yong Wang, Wenyin Gong and Crina Grosan

Scope and Topics

Nonlinear equation systems (NESs) frequently arise in many physical, electronic, and mechanical processes. Very often, a NES contains multiple roots. Since all of these roots are important in real-world applications, it is desirable to locate them simultaneously in a single run, so that the decision maker can select the final root that best matches his/her preference. Several classical methods for solving NESs, such as Newton-type methods, have been proposed. However, these methods have notable disadvantages: they depend heavily on the starting point of the iterative process, can easily get trapped in a local optimal solution, and require derivative information. Moreover, they tend to locate just one root rather than multiple roots. The sketch below illustrates the starting-point sensitivity.
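
In the Python sketch below, a Newton-type solver (SciPy's fsolve) converges to different roots of the same toy NES depending on the initial guess; the system itself is our own illustration:

```python
import numpy as np
from scipy.optimize import fsolve

def nes(v):
    x, y = v
    return [x**2 + y**2 - 1.0,   # unit circle
            x**2 - y]            # parabola; two real intersection roots

for start in ([1.0, 1.0], [-1.0, 1.0]):
    root = fsolve(nes, start)
    print(start, "->", np.round(root, 6))
# Each call returns a single root; simultaneously locating all roots in
# one run is exactly what this competition asks EAs to do.
```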

Solving NESs with EAs is an important, challenging, and practically interesting area in the evolutionary computation community. However, systematic work in this area is still very limited. The aim of this competition is to facilitate the development of EAs for locating multiple roots of NESs.

Submission instructions

The participants are invited to submit a paper to our special session “CEC-61 Special Session on Evolutionary Algorithms for Nonlinear Equations Systems” in the conference submission system. Please also send the codes and results to ywang@csu.edu.cn for verification.

Submission deadline


Paper submission: 7th Jan 2019, 23:59 (GMT)
Code and results submission: 30th April 2019, 23:59 (GMT)



CEC-C11 Competition on Large-Scale Global Optimization


Organized by Daniel Molina and Antonio LaTorre

Supported by IEEE CIS Task Force on Large Scale Global Optimization

Scope and Topics

In the past two decades, many evolutionary algorithms have been developed and successfully applied to a wide range of optimization problems. Although these techniques show excellent search capabilities on small or medium-sized problems, they still encounter serious challenges on large-scale problems, i.e., problems with several hundreds to thousands of variables. This is due to the curse of dimensionality: since the size of the solution space grows exponentially with the number of decision variables, there is an urgent need for more effective and efficient search strategies that can explore this vast solution space within limited computational budgets. In recent years, research on scaling up EAs to large-scale problems has attracted significant attention, including both theoretical and practical studies.

This special session is devoted to highlighting recent advances in EAs for large-scale global optimization (LSGO) problems, involving single or multiple objectives; unconstrained or constrained; and binary/discrete, real, or mixed decision variables. More specifically, we encourage interested researchers to submit their original and unpublished work on:

  • Theoretical and experimental analysis on the scalability of EAs;
  • Novel approaches and algorithms for scaling up EAs to large-scale optimization problems;
  • Applications of EAs to real-world large-scale optimization problems;
  • Novel test suites that help researchers to understand the characteristics of large-scale optimization problems.

Submission instructions

The competition allows participants to run their own algorithms on 15 benchmark functions, each of 1000 dimensions. Detailed information about these benchmark functions is provided in the following technical report:

X. Li, K. Tang, M. Omidvar, Z. Yang and K. Qin, “Benchmark Functions for the CEC’2013 Special Session and Competition on Large Scale Global Optimization,” Technical Report, Evolutionary Computation and Machine Learning Group, RMIT University, Australia, 2013.

Source code is available on the website for C++, MATLAB, Java and Python.

The technique and results can be reported in a paper for the corresponding special session. Authors must provide their results in the format shown in the aforementioned technical report (Table 2). To make it easier to obtain results in the requested format, the original benchmark source code has been modified to automate this task (except in the Java version). Additionally, several tools are provided to create an Excel file with the results recorded by the modified code, together with the LaTeX table, to allow their easy inclusion in the paper.

To help researchers compare their proposals with previous winners, we have developed a website, https://tacolab.org, which allows researchers to compare the data of their proposal (provided as an Excel file) with those of previous algorithms. Several reports, both tables and figures, can be generated automatically by this tool (and exported for inclusion in the manuscript), including, in the LSGO Competition report, plots of the criteria used in the competition.

Submission deadline


Paper submission (including the Special Sessions): 7th Jan 2019, 23:59 (GMT)
Competition submission: 7th Feb 2019, 23:59 (GMT)



CEC-C12 Divide-the-Dollar Competition


Organized by Daniel Ashlock and Garrison Greenwood

Scope and Topics

The conventional divide-the-dollar game is a two-player game in which the players simultaneously bid on how to divide a dollar. If the bids sum to a dollar or less, each player receives their bid; otherwise they receive nothing. This contest is based on the generalized divide-the-dollar game, which has N ≥ 2 players. In this game, instead of dividing a dollar, a scoring set S ⊂ R^N is used. Each player bids one coordinate of a point; if the resulting point lies in the scoring set, each player receives their bid, otherwise nothing (a payoff sketch is given below). Players will be given several example sets, similar to the optimization problems in an optimization contest, to train a general-purpose agent that learns a generalized divide-the-dollar problem from feedback. Each participant will upload an agent to play a generalized divide-the-dollar game.
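
The payoff rule is simple to state in code; the following Python sketch covers the two-player case used in the contest, with an example scoring set of our own (not an official contest set):

```python
from typing import Callable, Sequence

def payoffs(bids: Sequence[float],
            scoring_set: Callable[[Sequence[float]], bool]) -> list:
    # Each player bids one coordinate; the joint bid is a point in R^N.
    # If that point lies in the scoring set, each player receives their
    # own bid; otherwise everyone receives nothing.
    return list(bids) if scoring_set(bids) else [0.0] * len(bids)

# Example scoring set: the triangle x >= 0, y >= 0, x + y <= 2.
example_set = lambda p: all(c >= 0.0 for c in p) and sum(p) <= 2.0

print(payoffs([0.9, 1.0], example_set))  # point in S     -> [0.9, 1.0]
print(payoffs([1.5, 1.2], example_set))  # point not in S -> [0.0, 0.0]
```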

The contest will use sets not previously seen by the players and will be restricted to the two-player version. All sets satisfy (x, y) ∈ R^2 with x ≥ 0, y ≥ 0, and x, y ≤ 2 (i.e., both bids lie in [0, 2]), and consist of one or more simply connected regions. Agents will participate in a round-robin tournament with the score on each set recorded. During play, the players will receive feedback in the form of each player's bid and the outcome (score/no score). Agents will also have access to the history of bids each agent has made and whether each bid scored. Winners will be determined for each problem test set, and the overall winner will be the agent with the best average score over all of the problem test sets. Agents can be designed using any computational intelligence technique. Contest participants will upload their agent, written against a Java framework, through the competition website. The uploaded agent must be standalone. Each participant may submit only one agent to the contest. Each participant is expected to submit a short paper (2-3 pages) describing their agent's structure and the computational intelligence methods used to construct and train it. Papers will be presented orally during the special session on games and will appear in the conference proceedings. Winners will be announced during the special session.

This contest is intended as a successor to the prisoner's dilemma contests, with generalized divide-the-dollar being a more complex game with a far larger strategy space. The contest organizers have published at least one agent representation that can play this game, but adapting to unknown scoring sets is a challenge that is likely to spark research in agent representations and advance the theory and practice of mathematical games in evolutionary computation.

Submission instructions

TBA

Submission deadline

TBA