Google has beaten Go, what could it do to bridge?
#1
Posted 2016-January-28, 10:34
Rather than trying to use pure computation to conquer Go's ridiculous search space, they used deep neural networks that learn first from human games, and then from games played against themselves.
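At a toy scale, that two-stage recipe can be sketched in a few lines of Python. To be clear, this is a minimal illustration under invented assumptions (a linear "policy" on random data), not anything from AlphaGo itself:

# Toy illustration of the two-stage recipe described above.
# Everything here is illustrative: a linear "policy" on random data.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_MOVES = 16, 8          # stand-in board encoding and move space

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W = rng.normal(scale=0.1, size=(N_MOVES, N_FEATURES))  # the "policy network"

# Stage 1: supervised learning from (position, expert move) pairs.
expert_X = rng.normal(size=(1000, N_FEATURES))
expert_y = rng.integers(0, N_MOVES, size=1000)         # placeholder labels
for x, y in zip(expert_X, expert_y):
    p = softmax(W @ x)
    grad = -np.outer(p, x)
    grad[y] += x                                       # cross-entropy gradient
    W += 0.01 * grad

# Stage 2: self-play, reinforcing moves from games the network "wins"
# against a frozen copy of itself (the outcome rule here is a stand-in).
W_frozen = W.copy()
for game in range(200):
    x = rng.normal(size=N_FEATURES)
    move = rng.choice(N_MOVES, p=softmax(W @ x))
    opponent = rng.choice(N_MOVES, p=softmax(W_frozen @ x))
    reward = 1.0 if move > opponent else -1.0          # stand-in for win/loss
    p = softmax(W @ x)
    grad = -np.outer(p, x)
    grad[move] += x                                    # REINFORCE update
    W += 0.01 * reward * grad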
What do you think Google could do with this technology if they decided to turn to bridge?
Some links:
http://googleresearc...game-of-go.html
http://www.nature.co...ature16961.html
#2
Posted 2016-January-28, 10:54
#3
Posted 2016-January-28, 12:21
A few years ago Uday and I experimented with using our archives of years of BBO auctions to help GIB with its bidding -- before making a bid, it would search for similar hands and auctions to see what the most common bids were. We just assumed that most auctions used similar systems (mostly SAYC and 2/1), and that the rest would be outliers that didn't hurt the statistics. We weren't using this to teach it the basic principles of bidding; it was just a sanity check: if its simulations said to make a particular bid, but the database search showed that fewer than 20% of humans chose that action, the bid was ruled out. But even with that limited scope, it was not very helpful -- there are so many combinations of hand types and auctions that the number of matches was generally too low for useful statistics, except early in the auction, where the existing bidding rules are already pretty good.
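In rough Python, a check like that might look as follows (a reconstruction: the record type, the similar() test, and the 50-match minimum are illustrative; only the 20% threshold comes from the description above):

# Rough reconstruction of the archive sanity check described above.
from dataclasses import dataclass

@dataclass
class Record:
    auction: tuple    # calls so far, e.g. ("1N", "P")
    hand: frozenset   # the 13 cards held
    bid: str          # the call the human actually chose next

def similar(h1, h2):
    # Placeholder similarity test: hands sharing at least 9 cards.
    return len(h1 & h2) >= 9

def passes_sanity_check(candidate, hand, auction, archive, min_matches=50):
    """Veto a simulation-chosen bid if humans in similar situations
    chose it less than 20% of the time."""
    matches = [r for r in archive
               if r.auction == auction and similar(r.hand, hand)]
    if len(matches) < min_matches:
        return True                  # too few matches: trust the simulation
    chosen = sum(r.bid == candidate for r in matches)
    return chosen / len(matches) >= 0.20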
#4
Posted 2016-January-28, 16:33
#5
Posted 2016-January-28, 18:20
The problem with bridge is that you don't know where all the cards are, whereas in chess and Go you know where all the pieces are.
Bidding has improved considerably but still leaves a lot to be desired. When chess programs were starting to challenge masters, bridge programs were no better than beginners at bidding.
#6
Posted 2016-January-28, 20:06
steve2005, on 2016-January-28, 18:20, said:
The problem with bridge is that you don't know where all the cards are, whereas in chess and Go you know where all the pieces are.
Bidding has improved considerably but still leaves a lot to be desired. When chess programs were starting to challenge masters, bridge programs were no better than beginners at bidding.
We should start a campaign to get Google to tackle bridge next, precisely because it is not well suited for computers to solve. The challenge bridge presents, with its incomplete information, is an interesting and logical next step in the progression of artificial intelligence.
#7
Posted 2016-January-28, 21:31
#8
Posted 2016-January-29, 06:58
Yesterday the news broke again: computer software has defeated a professional player at the game of Go. That's incredible; it happened so suddenly.
In general, in chess each position offers about 35 possible moves and a game lasts about 80 moves, while in Go each position can offer around 250 possible moves and a game lasts at least 150 moves.
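In game-tree terms, those figures mean roughly 35^80 positions for chess versus 250^150 for Go; a quick Python calculation shows the scale of the gap:

# Game-tree sizes implied by the figures above: branching ^ game length.
from math import log10
print(f"chess ~ 10^{80 * log10(35):.0f}")    # ~ 10^124 possible games
print(f"Go    ~ 10^{150 * log10(250):.0f}")  # ~ 10^360 possible games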
Lee Sedol, the South Korean Go player, is a great player who has won the most world Go titles over the past 10 years. The match will be held this March in Seoul, the South Korean capital, with a $1 million prize provided by Google, which shows the AlphaGo team's strong self-confidence.
However, among players and fans alike in China, Japan, and South Korea, who believes a computer can beat a top human player? Nobody! Does Google's Go program possess some divine magic? Even so, we would like to wish the AlphaGo team the best.
#9
Posted 2016-January-29, 07:10
I think that there are some complicated issues around disclosure, but nothing that seems particularly challenging.
Even the disclosure issues seem solvable, especially if we can require that the competitors provide the computer with a corpus of hands consistent with the bidding so far.
Note that this is something that is relatively easy for a computer to do, but hard for a human.
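That asymmetry is easy to see in code: a program can deal and filter thousands of random hands against the constraints a bid implies, something no human can do at the table. A minimal sketch (the 1NT constraint of 15-17 balanced is an illustrative choice of mine):

# Minimal sketch of building a corpus of hands consistent with an
# auction: deal at random, keep deals matching the bids' constraints.
import random

DECK = [(suit, rank) for suit in "SHDC" for rank in "23456789TJQKA"]
HCP = {"A": 4, "K": 3, "Q": 2, "J": 1}

def consistent_with_1nt(hand):
    points = sum(HCP.get(rank, 0) for _, rank in hand)
    lengths = sorted(sum(1 for s, _ in hand if s == suit) for suit in "SHDC")
    balanced = lengths[0] >= 2 and lengths[1] >= 3  # at most one doubleton
    return 15 <= points <= 17 and balanced

corpus = []
while len(corpus) < 1000:        # a computer filters thousands per second
    hand = random.sample(DECK, 13)
    if consistent_with_1nt(hand):
        corpus.append(hand)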
I think that the problems with having people play against computers are likely to be on the human side, rather than the computer side...
I suspect that the reason there is no equivalent of Deep Blue is the (relative) insignificance of the game.
Serious academic researchers prefer to focus on games that are more popular (Chess, Poker, Go).
#10
Posted 2016-January-29, 08:12
lycier, on 2016-January-29, 06:58, said:
There was certainly a win against a computer in 2008, due to exploiting a program bug, but I cannot even remember the last time I heard of a human vs. computer match being reported, so it is difficult to say what other losses there might have been.
#11
Posted 2016-January-29, 08:17
- no f7/f2 pawn
- an exchange deficit
- 3 moves behind (White had to play 1. e4, 2. d4, 3. Nf3 and had the move)
All of the games were drawn except the "3 moves behind" match, which the computer won. To me this is conclusive proof that humans can't challenge computers on a level playing field. Yeah, yeah, sampling problem. But any time I see a good player asked about this, they not only agree with this assessment, they say it is really not close at all.
https://www.chess.co...nal-battle-1331
George Carlin
#12
Posted 2016-January-29, 09:50
steve2005, on 2016-January-28, 18:20, said:
This is exactly why a neural-network approach is likely to be more suitable than the techniques currently in use. Neural networks are good at detecting patterns automatically, while traditional programming requires the programmer to spell out everything precisely.
In Go you know where all the pieces are, but that still wasn't enough for programs to reach human expert level. The problem is that there are so many combinations of plays that no program can analyze them all; you have to recognize patterns from experience and intuition. Describing all those patterns in a traditional program would be just as overwhelming, but a neural network can learn them on its own.
#13
Posted 2016-January-29, 09:55
hrothgar, on 2016-January-29, 07:10, said:
Note that this is something that is relatively easy for a computer to do, but hard for a human.
A corpus of hands is not very helpful without telling the computer what features they have in common, and what distinguishes them from hands that would have made other bids.
This is where the neural network would excel. You'd feed it millions of hands and auctions, and it would learn on its own how the bids relate to the hands and which features of a hand are important.
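Concretely, the setup could be as simple as a 52-bit card vector plus an encoding of the auction so far, with the human's next call as the training label. A hypothetical sketch of that framing (all encoding choices here are mine, not any existing system's):

# Sketch of framing bidding as supervised learning over archived deals:
# features = the 13 cards plus the auction so far, label = the call made.
import numpy as np

CALLS = ["P", "X", "XX"] + [f"{level}{strain}" for level in range(1, 8)
                            for strain in "CDHSN"]
MAX_CALLS = 20                       # pad/truncate auctions to this length

def encode(card_indices, auction):
    """card_indices: positions 0-51 of the cards held;
    auction: list of calls made so far, e.g. ["1N", "P"]."""
    x = np.zeros(52 + MAX_CALLS * len(CALLS))
    x[list(card_indices)] = 1.0      # one bit per card held
    for i, call in enumerate(auction[:MAX_CALLS]):
        x[52 + i * len(CALLS) + CALLS.index(call)] = 1.0
    return x

# Trained on millions of (encode(hand, auction), next_call) pairs, a
# network has to discover for itself which card patterns drive each bid.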
#14
Posted 2016-January-29, 14:33
After the date of the challenge was set, Lee said it was a great pleasure for him to play against artificial intelligence: "Whatever the outcome, it will be a very meaningful event in the history of Go. I have heard that the artificial intelligence is unexpectedly strong, but I am confident of winning, at least this time."
Many readers of course support Lee. They think the biggest difference between artificial intelligence and a human is that every step the computer calculates is its best choice, while a human's play is not necessarily best, but a human can set a trap. Of course, many readers strongly support AlphaGo: "Don't look down on artificial intelligence. AI has super computing power; can a human match that?"
I will vote for Lee Sedol. I think it is impossible for AlphaGo to beat a top human at present. AlphaGo may have beaten the European Go champion, but compared to the Go champions from China, Japan, and South Korea, he is far too weak.
#15
Posted 2016-January-29, 15:18
#16
Posted 2016-January-29, 17:48
#17
Posted 2016-February-02, 04:00
hrothgar, on 2016-January-29, 07:10, said:
I think that there are some complicated issues around disclosure, but nothing that seems particularly challenging.
Even the disclosure issues seem solvable, especially if we can require that the competitors provide the computer with a corpus of hands consistent with the bidding so far.
Note that this is something that is relatively easy for a computer to do, but hard for a human.
I think that the problems with having people play against computers are likely to be on the human side, rather than the computer side...
I suspect that the reason there is no equivalent of Deep Blue is the (relative) insignificance of the game.
Serious academic researchers prefer to focus on games that are more popular (Chess, Poker, Go).
I doubt that any of these claims or reasons are valid.
It is also a myth that there has been no serious academic effort.
The question of how challenging or complex a game is for the human mind is not the decisive criterion for whether we will see software beating humans.
Many seem to believe that if only IBM, Google, Apple, or the like provided some serious resources, we would see such a Bridge computer tomorrow.
I doubt that.
No serious amount of resources would have put a man on the moon in the 19th century.
You need good ideas and a strategy before resources (money) can be put into a productive investment.
And even if these conditions are present there is no guarantee of success.
For example, despite the effort and money spent so far, we always seem to be 50 years away from the first commercial fusion reactor.
I believe that, compared to Chess or Go, the challenges of putting Bridge logic into software are very different.
To mention just two:
Bridge is a game of incomplete information; Chess and Go are not.
Bridge is a partnership game played against another partnership; neither Chess nor Go is.
This alone makes it a challenge to define what a good Bridge program would even be.
One that plays with a clone of itself as partner? That is like identical twins playing Bridge together.
It is entirely possible that identical twins could be world class when playing together, but be only mediocre when playing with others.
They certainly would have a big advantage when playing in a partnership.
Not my definition of a great Bridge player. But this is the way Computer Bridge championships are played today.
A more suitable comparison would be two independently developed Bridge software programs playing against an expert partnership.
Two neural networks might be acceptable as partners, provided they were trained on independent deals and experience.
I am not saying we will never see Bridge computers capable of beating experts, but there are good reasons why nobody has come close so far.
Rainer Herrmann
#18
Posted 2016-February-02, 04:12
George Carlin
#19
Posted 2016-February-02, 06:41
#20
Posted 2016-February-02, 06:44
_________________
Valiant were the efforts of the declarer // to thwart the wiles of the defender // however, as the cards lay // the contract had no play // except through the eyes of a kibitzer.