The robots played all 320 hands available in the tournament. In a more realistic simulation, the robots would have played 48 boards each, like the players who participated.
The robots are:
- GIB Advanced
- GIB Basic
- Argine Advanced
- Argine Basic
- "ACBL-Ben" - a version of Lorand Dali's Ben robot trained on GIB for the bidding and on hands taken from ACBL human games for the play (more about that later).
You can see how the robots scored here.
For those who don't know, we're happy to announce that Lorand Dali has joined BBO's team this year.
Disclaimer: as this tournament is "just-declare", it only measures the play of the hand, which puts the "ACBL-Ben" model at a disadvantage, as card play is its weaker part. It plays like the average ACBL field and, according to Lorand, it doesn't "think" too much. So take this experiment with a grain of salt, but we still thought it was fun to do.
Disclaimer 2: Lorand is experimenting with other variations of Ben that are likely stronger than this particular model, including one he calls "the thinking Ben". These various "Bens" are not yet close to being implemented on BBO, and there is no intention to replace GIB with Ben. However, we believe that these experiments contribute to improving BBO's robots, and go a long way towards making bridge robots stronger, faster, and more likeable.
Here's a summary, if you don't want to click through the tournament results:
| Rank | MP | Score (%) | Robot Type | Robot Details |
|-----------|-----|-----------|--------------|------------------------------------|
| 60/618 | N/A | 56.30 | GIB Advanced | Advanced GIB Robot |
| 93/618 | N/A | 55.23 | Argine | Argine Robot |
| 123/618 | N/A | 54.29 | GIB Basic | Basic GIB Robot |
| 232/618 | N/A | 51.42 | Argine Basic | Simplified Argine Robot |
| 486/618 | N/A | 39.47 | ACBL Ben | Ben model trained on ACBL human deals |