BBO Discussion Forums: 2♥ or 1♠ response

2♥ or 1♠ response

#21 Trinidad

  • Group: Advanced Members
  • Posts: 4,531
  • Joined: 2005-October-09
  • Location: Netherlands

Posted 2021-November-10, 03:50

LBengtsson, on 2021-November-07, 15:26, said:

you should try to show support for your partner's suit at the earliest time, but here I bid 1♠ with 5♠3♥. it is forcing if you are not a passed hand. partner may have 5♥4♠ in his hand, and then you have a double fit when he raises ♠. that will allow you to reassess your hand.

You don't need to bid spades yourself. If partner has a game-invitational hand with 4 spades and 5 hearts, he will bid 2♠ after 1♥-2♥. Partner will intend this as a long-suit game try inviting 4♥, but you can bid 3♠ to show your hand. Partner will then bid 4♠ if he has four spades (and he will bid 4♥ if he made a long-suit game try on a three-card suit).

Rik
I want my opponents to leave my table with a smile on their face and without matchpoints on their score card - in that order.
The most exciting phrase to hear in science, the one that heralds the new discoveries, is not “Eureka!” (I found it!), but “That’s funny…” – Isaac Asimov
The only reason God did not put "Thou shalt mind thine own business" in the Ten Commandments was that He thought that it was too obvious to need stating. - Kenberg

#22 mw64ahw

  • Group: Advanced Members
  • Posts: 1,269
  • Joined: 2021-February-13
  • Gender: Not Telling
  • Interests: Bidding & play optimisation via simulation.

Posted 2021-November-10, 04:33

mikeh, on 2021-November-08, 18:01, said:

The problem with most simulations is that the designer only uses it to test how it works on the hands that fall within the parameters of the simulation.

This approach fails to take into account how one deals with the hands that used to be covered by the bids now co-opted for the convention.

Put another way… and with a very simple example… assume we wanted to see how well multi fares.

We simulate weak twos in the majors.

Fine

But to be useful we now need to simulate hands that we'd open with a weak 2D, if that's our alternative.

We might find that having to pass or open 3D costs more than we gain via multi

I've actually done this. But even with the best will in the world, it's impossible to simulate real world efficacy.

As an example, say I decide I'd open a weak 2D.

To know whether this worked, on balance, I have to make subjective decisions about how different opponents might compete…on some hands some opps would bid and others pass…and those who bid might have a choice of calls available.

Then I need to decide what partner would do.

Then I need to look at all the weak 2D hands…all 52 cards…and decide what would happen if I were to pass or to open 1D or 3D…since I can't open 2D if playing multi.

And then I'd have to do similar work on hands that would be opened multi.

Anyone who claims that they can evaluate conventions purely by simulating hands, usually with double dummy analysis and little consideration of how the other three players might act, doesn't understand simulations.

Equally, anyone who claims their simulations yield objective results is fooling themselves.

Thus, while simulations can be useful, the decision has to weigh a myriad of factors, including:

Memory load

Cost of errors

Gain from using the gadget

Loss from using the gadget (there's always loss)

Loss from other uses for the bid(s)

Ripple effects on the rest of the system

Degree of difficulty created for the opps.

I defy anyone to address these adequately by way of simulations.

Anyway, you clearly enjoy what you're doing. So I wish you well. I suspect, however, that as and if you progress in the game, you'll see things differently. Indeed, if you don't, then I predict that you won't in fact progress much.

If interested, this document was the basis for this particular plagiarisation, with my adaptations for Kaplan Inversion:
Fixing the Forcing Notrump (examples included) v.03.pdf - OneDrive (live.com)

In terms of modelling the problem, this is the challenge I am pursuing rather than the drive to compete internationally (for the time being, anyway).
My background is in financial modelling, and my approach to setting up the model for simulation is likely to be more involved than that of most practitioners.
You have highlighted a number of challenges:
  1. Memory load/Cost of errors - these are human weaknesses, but they can be built into a Monte Carlo simulation with a random but controlled frequency (see the first sketch after this list). As you imply, I am sure that at some level of frequency any gains are wiped out and you would be better off playing a simpler approach.
  2. Gain from using the gadget - this is a straightforward comparison to the base system run over X simulations.
  3. Loss from using the gadget (there's always loss) - this goes with 2 above; you are looking for a net gain, but a statistical analysis by hand classification can pinpoint any specific weakness. The trick is to build in the cost via an implementation of the scoring system in use. Intuitively one may say that finding a major-suit fit at the 2 level is preferable to a minor-suit fit; this is your null hypothesis, which can then be investigated (the second sketch after this list shows one way).
  4. Loss from other uses for the bid(s) - this is the same type of problem as 2 or 3, but it involves more than one base system.
  5. Ripple effects on the rest of the system - again, you are looking for an aggregate gain compared to your original approach, so this is captured via a broad simulation.
  6. Degree of difficulty created for the opps - the competitive angle is a step up in complexity, but there is enough data available to calibrate to the standard of opponents, since tournaments are graded. Again, the unpredictability of the competition can be built in. A generalised approach is difficult, but the opponents' system cards provide the constraints.
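
To make point 1 concrete, here is a minimal Monte Carlo sketch in Python of the loop I have in mind. Everything apart from the IMP scale is an assumption of mine: deal(), classify() and the two score_with_*() functions are trivial stubs standing in for a dealer, a hand classifier and two bidding engines scored double dummy, and the flat 300-point "forgot the convention" penalty is purely illustrative.

    import random

    # Standard IMP scale as (upper bound, IMPs) pairs: a raw score
    # difference below each bound converts to the paired number of IMPs.
    IMP_SCALE = [
        (20, 0), (50, 1), (90, 2), (130, 3), (170, 4), (220, 5),
        (270, 6), (320, 7), (370, 8), (430, 9), (500, 10), (600, 11),
        (750, 12), (900, 13), (1100, 14), (1300, 15), (1500, 16),
        (1750, 17), (2000, 18), (2250, 19), (2500, 20), (3000, 21),
        (3500, 22), (4000, 23), (float("inf"), 24),
    ]

    def score_imp(diff):
        """Convert a raw score difference into IMPs, keeping the sign."""
        for bound, imps in IMP_SCALE:
            if abs(diff) < bound:
                return imps if diff >= 0 else -imps

    # Stubs only, so the sketch runs end to end: a real study would wire
    # in a dealer, a hand classifier and two bidding engines backed by a
    # double-dummy solver.
    def deal(rng):
        return rng.gauss(0.0, 1.0)           # stand-in for a 52-card deal

    def classify(hand):
        return "balanced" if abs(hand) < 1.0 else "shapely"

    def score_with_base(hand):
        return 100 * round(hand)             # stand-in double-dummy score

    def score_with_gadget(hand):
        return 100 * round(hand + 0.05)      # assume a small bidding edge

    def simulate(n_deals, error_rate, forget_penalty=300, seed=42):
        """Net IMP gain of the gadget over the base system, with memory
        errors injected at a controlled frequency (point 1) and results
        aggregated by hand classification (points 2 and 3)."""
        rng = random.Random(seed)
        totals = {}                          # class -> [IMPs, deal count]
        for _ in range(n_deals):
            hand = deal(rng)
            base = score_with_base(hand)
            gadget = score_with_gadget(hand)
            if rng.random() < error_rate:    # the pair forgets the gadget
                gadget -= forget_penalty     # flat, assumed accident cost
            bucket = totals.setdefault(classify(hand), [0, 0])
            bucket[0] += score_imp(gadget - base)
            bucket[1] += 1
        return totals

    for cls, (imps, n) in sorted(simulate(100_000, error_rate=0.02).items()):
        print(f"{cls}: {imps / n:+.3f} IMPs/deal over {n} deals")

Sweeping error_rate upward then locates the break-even forgetting frequency at which the gadget's theoretical gain is wiped out.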
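
For the null-hypothesis side of point 3, a percentile bootstrap over the per-deal IMP deltas of a single hand class gives a quick read on whether an observed gain is distinguishable from simulation noise (simulate() above would need to record each delta individually rather than just the totals). bootstrap_ci() is again my own illustrative helper, standard library only.

    import random
    import statistics

    def bootstrap_ci(deltas, n_resamples=10_000, alpha=0.05, seed=7):
        """Percentile bootstrap confidence interval for the mean
        IMP delta per deal within one (non-empty) hand class."""
        rng = random.Random(seed)
        means = sorted(
            statistics.fmean(rng.choices(deltas, k=len(deltas)))
            for _ in range(n_resamples)
        )
        lo = means[int(alpha / 2 * n_resamples)]
        hi = means[int((1 - alpha / 2) * n_resamples) - 1]
        return lo, hi

If the whole interval sits above zero, the gain for that class is unlikely to be noise; if it straddles zero, either more deals are needed or the treatment is not earning its keep there.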


#23 DavidKok

  • Group: Advanced Members
  • Posts: 2,643
  • Joined: 2020-March-30
  • Gender: Male
  • Location: Netherlands

Posted 2021-November-10, 08:17

Many people have previously set out to apply thorough simulation, data analysis or other data-driven techniques to building a stronger bidding system. All six of the challenges you state can, in theory, be addressed to satisfaction, and doing so would result in a fantastic corpus of information for developing and testing bidding systems. Regrettably, nobody has ever successfully done so. In my limited experience, the people who have attempted this usually fail in one of several predictable ways, often with greatly overstated confidence in their results.
It is not at all clear to me that you will do better than the historical success rate of projects like these, which is close to zero. This is why people refer to experts, to results from high-level tournaments and to 'common practice' - not because bridge players are Luddites, but because these are the best sources available. Put more charitably, there is a huge amount of bridge knowledge and expertise already available, and you can (and probably should) use it as a starting point for more thorough investigation. The only hurdle is that this information is called 'expert practice' and is not always in a format that appeals to a scientific mindset (but that does not make it any less valid).

To be specific, I think points 1, 5 and 6 on your list are decisive for determining the value of a treatment, and I have next to no confidence that you are able to address these sufficiently well.

#24 mw64ahw

  • Group: Advanced Members
  • Posts: 1,269
  • Joined: 2021-February-13
  • Gender: Not Telling
  • Interests: Bidding & play optimisation via simulation.

Posted 2021-November-10, 08:53

DavidKok, on 2021-November-10, 08:17, said:

Many people have previously set out to apply thorough simulation, data analysis or other data-driven techniques to building a stronger bidding system. All six of the challenges you state can, in theory, be addressed to satisfaction, and doing so would result in a fantastic corpus of information for developing and testing bidding systems. Regrettably, nobody has ever successfully done so. In my limited experience, the people who have attempted this usually fail in one of several predictable ways, often with greatly overstated confidence in their results.
It is not at all clear to me that you will do better than the historical success rate of projects like these, which is close to zero. This is why people refer to experts, to results from high-level tournaments and to 'common practice' - not because bridge players are Luddites, but because these are the best sources available. Put more charitably, there is a huge amount of bridge knowledge and expertise already available, and you can (and probably should) use it as a starting point for more thorough investigation. The only hurdle is that this information is called 'expert practice' and is not always in a format that appeals to a scientific mindset (but that does not make it any less valid).

To be specific, I think points 1, 5 and 6 on your list are decisive for determining the value of a treatment, and I have next to no confidence that you are able to address these sufficiently well.

I'm a long way from developing a Deep Blue-type program, but perhaps I'll aim for one of the world computer bridge championships.
