State of the Art4 - BBO Discussion Forums


State of the Art4: The state of play

#1 User is offline   Scarabin 

  • Group: Full Members
  • Posts: 382
  • Joined: 2010-December-30
  • Gender:Male
  • Interests:All types of games especially bridge & war games.
    old bidding systems & computer simulation programming.

Posted 2012-February-03, 02:15

:) No examples. In this post I try to set out as clearly and as succinctly as I can my view of the current state of computer programming of bridge play. I will ignore computer bidding: bidding based on random simulations is on a par with using Buller's British Bridge as your bidding system. :lol:

There are two main approaches to writing a computer bridge program to reproduce human card-play:

The first is the "monte-carlo" or random simulation approach. This is a "one size fits all" type of approach requiring only a limited number of algorithms.

It involves a dealer program which provides random hands for the concealed players, constrained by the known cards in the open hands (the player's own hand and dummy). The number of random hands dealt is limited by the need to keep delays within acceptable limits. This is merely a quick way of approximating the true probabilities of the distribution of the unknown cards.

Next, every one of these sample distributions is examined on a double-dummy basis and the acceptable card or cards recorded. Then the card which is recorded on the greatest number of samples is played. If there is more than one such card, the choice between them is made either at random or according to some rule set in advance. We have seen that either method can lead to seriously weird results.
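A minimal sketch of this selection loop, in Python, might look as follows; the samples and dd_tricks parameters stand in for the constrained dealer and the double-dummy solver just described, and all the names are purely illustrative rather than any existing program's API:

from collections import Counter
import random

def monte_carlo_card(samples, legal_cards, dd_tricks, rng=random):
    """Pick a card by tallying double-dummy results over the sampled deals.

    samples   - hypothetical full layouts consistent with the cards we can see,
                produced elsewhere by a constrained dealer
    dd_tricks - dd_tricks(layout, card): double-dummy trick count if `card`
                is played on that layout (stands in for a real DD solver)
    """
    votes = Counter()
    for layout in samples:
        scores = {card: dd_tricks(layout, card) for card in legal_cards}
        best = max(scores.values())
        for card, score in scores.items():
            if score == best:            # record every DD-acceptable card
                votes[card] += 1
    top = max(votes.values())
    candidates = [c for c, v in votes.items() if v == top]
    return rng.choice(candidates)        # tie-break at random, or by a preset rule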

The major advantage of the random simulation approach is that it is relatively quick to program - one estimate is two years to develop a program of computer-championship standard. The great disadvantage is that it plays poorly on many types of deals and the timing of individual plays is decidedly eccentric. :(

The second approach may be labelled "pragmatic reasoning". This requires a much longer period of development with separate groups of algorithms for every type of play, and the computer declarer at least is also required to formulate an initial plan.

There are several programs on the market which could be classed as following this approach but I have not found any where programming has gone beyond a few basic plays.

However Fred Gitelman described such a program in 1992 and estimated developing it would take a further 5 years on top of the work he had already put into Base III. I fear his estimate may prove conservative.

The Bridge Baron team seem to have used a simpler program of this type, which they called Tignum 2, when they won the championship in 1997. Unfortunately they lost to GIB in 1998 and seem to have reverted to a monte-carlo clone thereafter.

Various academic papers (lists available) have discussed the limitations of random simulations and have formulated approaches to reproducing declarer play but no complete system or program has emerged so far.

To date, the random simulation programs have the edge in competitive play; I would say largely because no one has been prepared to devote the time necessary to develop a pragmatic program capable of beating them.

I would dearly like to see a fully developed pragmatic program before I die because it would offer so much more than a random simulation. It would be a program worth queuing in the rain for :rolleyes:

I do see one slender ray of hope. Each year the programmers return from the championship and settle down to improving their programs' competitiveness. Similar to Barmer and Inquiry improving GIB in response to our complaints.

Now random simulations can be improved by:

  • removing bugs
  • removing errors in the dealer or double-dummy programs
  • increasing the number of samples, though this can do no more than make the probabilities more nearly accurate

Thus at some time they have to realize that any further improvement must come from superimposing pragmatic reasoning on the monte-carlo results. Once this is done the door is opened to a seamless transition from a very imperfect random simulation to a potentially perfect pragmatic program. The program need never be off the market and the only visible evidence of change is that more and more deals begin to be played correctly. :)
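To make the idea concrete, here is a hedged sketch of one way such a superimposition could be organised: a list of pragmatic "recognizers", each handling a type of deal it understands, tried before the existing Monte Carlo chooser. The function and parameter names are my own illustration, not any actual program's design.

def choose_card(position, legal_cards, recognizers, monte_carlo_fallback):
    """Hybrid play selection: pragmatic modules first, random simulation last.

    recognizers          - pragmatic modules tried in order; each returns the
                           card it wants to play, or None if it does not
                           understand this type of deal
    monte_carlo_fallback - the existing random-simulation chooser
    """
    for recognize in recognizers:
        card = recognize(position, legal_cards)
        if card is not None:
            return card
    return monte_carlo_fallback(position, legal_cards)

As more recognizers are written, more deals are played correctly, while the program never has to leave the market.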

#2 User is offline   inquiry 

  • Group: Admin
  • Posts: 14,566
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Amelia Island, FL
  • Interests:Bridge, what else?

Posted 2012-February-03, 10:03

Quote

Similar to Barmer and Inquiry improving GIB in response to our complaints.


I have absolutely nothing to do with GIB and wouldn't know how to begin to write a computer program or even some lines of code (well, ok, I did take fortran in college and have written hello world in "c"). Barmar (not barmer) does, however.
--Ben--

#3 User is offline   hrothgar 

  • Group: Advanced Members
  • Posts: 15,372
  • Joined: 2003-February-13
  • Gender:Male
  • Location:Natick, MA
  • Interests:Travel
    Cooking
    Brewing
    Hiking

Posted 2012-February-03, 10:16

With all due respect: Do you have real, significant experience with computer programming in general or, better yet, in the realm of artificial intelligence?
Is there any reason that we should place any weight on your opinions?

Quote

To date, the random simulation programs have the edge in competitive play; I would say largely because no one has been prepared to devote the time necessary to develop a pragmatic program capable of beating them.


Other than the fact that this is a tautology, why should we believe that this is true?

Do you have any domain expertise?
Why should I care about your opinion?
Alderaan delenda est

#4 User is offline   barmar 

  • Group: Admin
  • Posts: 21,398
  • Joined: 2004-August-21
  • Gender:Male

Posted 2012-February-03, 10:54

inquiry, on 2012-February-03, 10:03, said:

I have absolutely nothing to do with GIB and wouldn't know how to begin to write a computer program or even some lines of code (well, ok, I did take fortran in college and have written hello world in "c"). Barmar (not barmer) does, however.

Not sure why, but he seems to be confusing you with Georgi, who does 90% of all the GIB rule improvements.

#5 User is offline   EricK 

  • Group: Advanced Members
  • Posts: 2,303
  • Joined: 2003-February-14
  • Location:England

Posted 2012-February-03, 14:50

Warning: The below is just based on my very limited knowledge of bridge programs or of programming in general!

It's probably not original, but I think bridge programs need a two-step Monte Carlo approach. The problem with the double dummy methods as they stand is that they just look at the card to play from this hand, not the one that will be played by partner (or dummy). A classic example is, say, AT3 opposite KJ2. The 3 is a double dummy play for 3 tricks, but when LHO follows small, half the time you have to play the K and half the J. So the 3 will appear to be a much better play than it really is, and the computer will only find out, too late, that it has a guess.

So what could happen is the Monte Carlo runs as normal, then the top two or three results are analysed to see if they are actually one play (eg small to the King) or two different plays (eg small to the King or Jack depending on where the Q happens to be). Maybe this could be done by seeing whether, after LHO plays small, every possible card is significantly worse, double-dummy, than the original estimate for the play.
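A rough sketch of that second pass, assuming the first pass has already produced average double-dummy figures; the function name, parameters and numbers are illustrative only:

def has_hidden_guess(original_estimate, followup_estimates, margin=0.5):
    """Return True if a "best" card from the first Monte Carlo pass conceals a guess.

    original_estimate  - average DD tricks credited to the card in the first pass
    followup_estimates - average DD tricks for each card we could then play from
                         the other hand once LHO has followed small
    margin             - how much worse a follow-up must be to count as a drop-off

    If no single follow-up card lives up to the first-pass figure, that figure
    was double-dummy optimism and the "one play" is really two different plays.
    """
    return all(original_estimate - est >= margin for est in followup_estimates)

# The AT3 opposite KJ2 example: the first pass credits low towards the closed
# hand with 3 tricks, but low to the K and low to the J each average about 2.5.
print(has_hidden_guess(3.0, [2.5, 2.5]))   # True: the play hides a guess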

#6 User is offline   Scarabin 

  • Group: Full Members
  • Posts: 382
  • Joined: 2010-December-30
  • Gender:Male
  • Interests:All types of games especially bridge & war games.
    old bidding systems & computer simulation programming.

Posted 2012-February-03, 20:52

inquiry, on 2012-February-03, 10:03, said:

I have absolutely nothing to do with GIB and wouldn't know how to begin to write a computer program or even some lines of code (well, ok, I did take fortran in college and have written hello world in "c"). Barmar (not barmer) does, however.
My apologies to Barmar and to you. My spelling has become atrocious and unfortunately BBO's spelling checker does not cover user names! :rolleyes:

#7 User is offline   Scarabin 

  • Group: Full Members
  • Posts: 382
  • Joined: 2010-December-30
  • Gender:Male
  • Interests:All types of games especially bridge & war games.
    old bidding systems & computer simulation programming.

Posted 2012-February-03, 21:04

hrothgar, on 2012-February-03, 10:16, said:

With all due respect: Do you have real, significant experience with computer programming in general or, better yet, in the realm of artificial intelligence?
Is there any reason that we should place any weight on your opinions?



Other than the fact that this is a tautology, why should we believe that this is true?

Do you have any domain expertise?
Why should I care about your opinion?


You should respect anyone's opinions if they are logical and founded on truth. I think you should be able to check my statements.

I have re-read the sentence you quote but fail to recognize any tautology. Perhaps it is like Thurber's English teacher at high school who ruined Shakespeare for him. He used to dream of walking through the forest of Arden and having figures of speech jump out from behind every tree.

#8 User is offline   Scarabin 

  • Group: Full Members
  • Posts: 382
  • Joined: 2010-December-30
  • Gender:Male
  • Interests:All types of games especially bridge & war games.
    old bidding systems & computer simulation programming.

Posted 2012-February-03, 21:14

barmar, on 2012-February-03, 10:54, said:

Not sure why, but he seems to be confusing you with Georgi, who does 90% of all the GIB rule improvements.


My apologies for misspelling your user name. My spelling and my memory are getting atrocious. Thank goodness for spelling checkers.

My error in respect to Inquiry is actually a compliment: He analysed one of my example deals and extended the analysis to the workings of the underlying computer program. He convinced me that he was familiar with Monte Carlo simulations.

#9 User is offline   Scarabin 

  • Group: Full Members
  • Posts: 382
  • Joined: 2010-December-30
  • Gender:Male
  • Interests:All types of games especially bridge & war games.
    old bidding systems & computer simulation programming.

Posted 2012-February-03, 21:25

EricK, on 2012-February-03, 14:50, said:

Warning: The below is just based on my very limited knowledge of bridge programs or of programming in general!

It's probably not original, but I think bridge programs need a two-step Monte Carlo approach. The problem with the double dummy methods as they stand is that they just look at the card to play from this hand, not the one that will be played by partner (or dummy). A classic example is, say, AT3 opposite KJ2. The 3 is a double dummy play for 3 tricks, but when LHO follows small, half the time you have to play the K and half the J. So the 3 will appear to be a much better play than it really is, and the computer will only find out, too late, that it has a guess.

So what could happen is the Monte Carlo runs as normal, then the top two or three results are analysed to see if they are actually one play (eg small to the King) or two different plays (eg small to the King or Jack depending on where the Q happens to be). Maybe this could be done by seeing whether, after LHO plays small, every possible card is significantly worse, double-dummy, than the original estimate for the play.


Perhaps not so limited? Remember however that a double dummy analysis knows exactly where the Q is so it sees the play as 100%. The probability is introduced by the sampling technique and this may be inaccurate.

I think the real trouble is that a Monte Carlo technique is unlikely to spot a throw-in making the finesse unnecessary. I am not sure why this should be so. My observations (still anecdotal rather than the result of repeated trials) seem to show this.

What you suggest in your last paragraph would improve these programs immensely but you would have to cover an awful lot of cases and this would take time.

#10 User is offline   Free 

  • mmm Duvel
  • Group: Advanced Members
  • Posts: 10,728
  • Joined: 2003-July-30
  • Gender:Male
  • Location:Belgium
  • Interests:Duvel, Whisky

Posted 2012-April-16, 10:56

Quote

I would dearly like to see a fully developed pragmatic program before I die because it would offer so much more than a random simulation.

My suggestion: first try to prove that a pragmatic approach would provide significantly better results. If you can't, you better program it yourself, because businesses won't jump on a project which requires enormous amounts of time and effort (= money) for minimal or no gain.

If developers could teach their current programs the art of discovery plays, they would already improve dramatically. You could call this some sort of hybrid approach, whatever, but the DD basis is imo the way to go. Improve computing power and their results will also improve - for minimal or no cost at all!
"It may be rude to leave to go to the bathroom, but it's downright stupid to sit there and piss yourself" - blackshoe

#11 User is offline   kgr 

  • Group: Advanced Members
  • Posts: 3,423
  • Joined: 2003-April-11

Posted 2012-April-16, 13:56

I wonder whether the latest versions run a DD simulation from one hand only. They should run a DD simulation for all hands.
I mean: do they not take into account that, e.g., declarer has a guess?

#12 User is offline   Scarabin 

  • Group: Full Members
  • Posts: 382
  • Joined: 2010-December-30
  • Gender:Male
  • Interests:All types of games especially bridge & war games.
    old bidding systems & computer simulation programming.

Posted 2012-April-16, 22:28

Free, on 2012-April-16, 10:56, said:

My suggestion: first try to prove that a pragmatic approach would provide significantly better results. If you can't, you better program it yourself, because businesses won't jump on a project which requires enormous amounts of time and effort (= money) for minimal or no gain.


Good suggestion, but it is difficult to prove this unless you already have a good pragmatic program. The work involved in writing a really good program would be prohibitive: first I would have to write a fast, accurate double-dummy program; second I would have to use this to write a monte-carlo simulation; third I would have to write a pragmatic program incorporating a single-dummy program like suit-play, and I would probably have to write all this in Prolog so that the pragmatic program could be open-ended. I do not think I would ever finish, and I do not think I would even start without a good team of programmers! Of course an author with an existing random simulation program can expand this with pragmatic supplements.

Free, on 2012-April-16, 10:56, said:

If developers could teach their current programs the art of discovery plays, they would already improve dramatically. You could call this some sort of hybrid approach, whatever, but the DD basis is imo the way to go. Improve computing power and their results will also improve - for minimal or no cost at all!


Agree with your first conclusion and with your second so long as the considerations are purely financial. However I like a hybrid approach and dislike a double dummy approach. Monte carlo simulations are only acceptable so long as they do not attain double dummy accuracy. If they do they become tantamount to cheating!

Do you agree?

#13 User is offline   barmar 

  • Group: Admin
  • Posts: 21,398
  • Joined: 2004-August-21
  • Gender:Male

Posted 2012-April-17, 08:24

kgr, on 2012-April-16, 13:56, said:

I wonder whether the latest versions run a DD simulation from one hand only. They should run a DD simulation for all hands.
I mean: do they not take into account that, e.g., declarer has a guess?

What does this mean? DD means all players can see all 4 hands, and make the most advantageous plays.

When not using the GIBson algorithm, GIB deals a bunch of hands that it thinks are consistent with the bidding, then runs DD simulations of each of them.

#14 User is offline   kgr 

  • Group: Advanced Members
  • Posts: 3,423
  • Joined: 2003-April-11

Posted 2012-April-17, 09:03

barmar, on 2012-April-17, 08:24, said:

What does this mean? DD means all players can see all 4 hands, and make the most advantageous plays.

When not using the GIBson algorithm, GIB deals a bunch of hands that it thinks are consistent with the bidding, then runs DD simulations of each of them.
Suppose that GIB is West and knows from the bidding that South has Kxxx in a particular suit.

South plays an ace and GIB deals a bunch of hands that it thinks are consistent with the bidding, then runs DD simulations of each of them.
GIB sees that South will always make all the tricks DD, so it might just as well discard from South's suit as from any other. But then there are only two cards of that suit still out and South can play them from the top.
So when West runs the DD simulation it should run, for every deal it generates for South, a DD simulation from South's viewpoint.
...I'm not sure if something like that is done.

#15 User is offline   Free 

  • mmm Duvel
  • Group: Advanced Members
  • Posts: 10,728
  • Joined: 2003-July-30
  • Gender:Male
  • Location:Belgium
  • Interests:Duvel, Whisky

Posted 2012-April-17, 10:21

I've been thinking about DD solvers in the past, and I came to the conclusion that in theory, the ultimate (and still relatively easy) way to go is to let computer players make a decision based on SD results rather than DD results. This would require an enormous (exponential) amount of extra computing power (which makes it quite impossible at the moment), but it would definitely solve many issues GIB and other DD-based programs have while it keeps the good results from these programs. I think kgr is suggesting the same, but it's quite difficult to explain. ;)

- kgr's example would easily be solved. Since all South's cards are equally successful DD, he could play small to the Ace in lots of situations. The SD result will mean that discarding from one suit has more chance of success than discarding from the other (around 50% vs 0%).

- When there is a 100% line, it would still find it because SD results would also calculate a certain line as 100% successful.

- I'm not sure what would happen with playing some random cards, blocking suits, discarding Kings,... However, when he's squeezed with Kx-A after dummy's AQ and declarer's x-K, then he would discard the small card, because SD he still has a chance of declarer taking the finesse (unless HCP clearly* show he's got both honors), while discarding the K or the A always leads to defeat.
(*) When there's even the slightest chance of him not having the K, then it may encounter a deal where declarer should take the finesse, so SD keeping the K and the A will be clearly best.
"It may be rude to leave to go to the bathroom, but it's downright stupid to sit there and piss yourself" - blackshoe

#16 User is offline   barmar 

  • Group: Admin
  • Posts: 21,398
  • Joined: 2004-August-21
  • Gender:Male

Posted 2012-April-17, 10:36

GIB doesn't try to "get into the opponent's head", so it doesn't know that declarer doesn't know the location of the Q. As a result, we often see boneheaded defensive plays -- declarer leads a low card towards dummy and West puts in the Q, taking away the guess completely. West assumes South is playing double dummy, so will always guess right, and it's not giving up anything by playing the Q.

Running simulations from the other player's viewpoint results in a combinatorial explosion -- if we consider 25 hands in a simulation, this means we'd have to consider 25*25 = 625 hands to simulate our decision and declarer's decision. Or maybe even 25*25*25 = 15625 -- declarer must consider both defenders, and defenders must consider their partner's viewpoint as well. And that's just going back one trick -- it gets worse if you try to work out why they've taken the whole line they've taken.
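For what it's worth, a back-of-the-envelope sketch of that growth, assuming the 25-deal figure used above:

SAMPLES = 25   # deals sampled per simulated viewpoint, as in the figures above

def dd_solves(nested_viewpoints: int) -> int:
    """Double-dummy solves needed per card decision if every nested
    viewpoint re-samples SAMPLES deals of its own."""
    return SAMPLES ** nested_viewpoints

for depth in range(1, 4):
    print(depth, "viewpoint(s):", dd_solves(depth), "DD solves")
# 1 viewpoint(s): 25 DD solves
# 2 viewpoint(s): 625 DD solves
# 3 viewpoint(s): 15625 DD solves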

Humans solve this by using planning rather than simulations. We then use our imagination to figure out from the play what the opponent's plan seems to be, and then infer the likely hands. Programming this would be hard AI.

#17 User is offline   Cthulhu D 

  • Group: Advanced Members
  • Posts: 1,169
  • Joined: 2011-November-21
  • Gender:Not Telling
  • Location:Australia
  • Interests:Overbidding

Posted 2012-April-17, 21:42

barmar, on 2012-April-17, 10:36, said:

Humans solve this by using planning rather than simulations. We then use our imagination to figure out from the play what the opponent's plan seems to be, and then infer the likely hands. Programming this would be hard AI.


Curious here: why doesn't it make sense to use pre-programmed optimal play of a suit combination (like suit-play) if simulation doesn't generate a clear best line? So have a pre-calculated table of all suit combinations, and when it's declaring it looks up the one it has and makes that play.

It's probably most relevant for defence when it 'knows' that all the cards are equal from a DD perspective. It would make sense to fall back on a rules engine here. So have a default rule on what to play in second seat from any given holding.

#18 User is offline   Antrax 

  • Group: Advanced Members
  • Posts: 2,458
  • Joined: 2011-March-15
  • Gender:Male

Posted 2012-April-17, 22:56

I can think of two problems with this:
a) Making it too predictable. If the rule says you play a 2 and you don't play a 2, then you don't have a 2.
b) Making it dumber in some situations, where the percentage action is influenced by information it should have but which suitplay won't take into account.

#19 User is offline   Cthulhu D 

  • Group: Advanced Members
  • Posts: 1,169
  • Joined: 2011-November-21
  • Gender:Not Telling
  • Location:Australia
  • Interests:Overbidding

Posted 2012-April-17, 23:22

Antrax, on 2012-April-17, 22:56, said:

I can think of two problems with this:
a) Making it too predictable. If the rule says you play a 2 and you don't play a 2, then you don't have a 2.
b) Making it dumber in some situations, where the percentage action is influenced by information it should have but which suitplay won't take into account.


A is easily addressed by saying 'pick either the 4 or the 2' from Q42 or 'random from Q or J' from QJ tight.

B Yeah, it's only relevant when it comes to the situation where all cards are double dummy equivalent - so simulations indicate that it doesn't matter what card you should play from Q42 - the root cause of the boneheaded defence plays Barmar is talking about.

If and only if simulations say 'nope, doesn't matter' is it worth resorting to pre-programmed rules. It should be computationally relatively cheap because you could pre-program it.
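A small sketch of that kind of fallback, with a made-up rule table keyed by the holding; the table entries and the function name are purely illustrative and not taken from any actual program:

import random

# Illustrative only: default second-seat plays for a few holdings.
SECOND_SEAT_RULES = {
    "Q42": ["4", "2"],   # either small card, never the queen
    "QJ":  ["Q", "J"],   # random from touching honours
    "K3":  ["3"],
}

def second_hand_card(holding, dd_equivalent_cards, rng=random):
    """Called only when the simulation says every legal card is equal
    double-dummy. Prefer the pre-programmed default, restricted to the
    cards the simulation allows; otherwise fall back to a random choice."""
    preferred = [c for c in SECOND_SEAT_RULES.get(holding, []) if c in dd_equivalent_cards]
    return rng.choice(preferred or dd_equivalent_cards)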

#20 User is offline   barmar 

  • Group: Admin
  • Posts: 21,398
  • Joined: 2004-August-21
  • Gender:Male

Posted 2012-April-18, 08:23

Whenever I think about programming in tables like this, I think back to Mike Lawrence's book on card combinations. It's organized into chapters with several hands where declarer has to figure out how to play the same card combination, but has different inferences from the way the opponents have bid and/or played, or you're in a different contract and have different constraints. Tables of card combinations usually make simplifying assumptions, like you have adequate entries to both hands, there's no danger hand, etc. When you put them into the context of an entire hand, things aren't nearly so simple.

So while there may be times when a table like this can be used, figuring out when you're in such a situation is difficult.
