Two Plus Two Older Archives  

Two Plus Two Older Archives > General Poker Discussion > Poker Theory
  #81 - 06-15-2004, 02:19 PM
Aisthesis (Junior Member, joined Nov 2003, 5 posts)
Terminology

Do you think it would create more or less confusion at this point if I started using the term "strategy" for what I have been calling an ordered pair (in the case of this game) of decision-sets? Then I could use the term "strategy-adaptation function" (I think you suggested something like that in an earlier post) for what I have up to now been calling a "strategy-function."

The only thing is that I'm not about to re-write all this stuff in those terms. So, I don't want to create additional confusion by switching terminology in mid-stream. If you think it's better to switch, I'll be glad to. But if I do, please bear this change in mind when discussing anything in my previous posts (which will all be in the old terminology).

The main thing is that we have some consistent way of talking about the mathematical objects we're dealing with. We can call them "goons" and "spoons" for all I care, but it will make things much easier if we're both (or anyone else who joins in) using the same terms for the same objects.

What I'm obviously driving at here is the question of how important these strategy-adaptation functions really are. And, if we adopt that terminology, I'm working my way up to trying to show that when we've loosely been speaking of "optimal strategies," we were probably actually talking about (partial specifications of) optimal strategy-adaptation functions.

Whether or not that is the case is going to depend on the actual results we get when playing around with some different strategies in some of the games we've been examining.
  #82 - 06-15-2004, 05:18 PM
well (Junior Member, joined May 2003, 25 posts)
Re: Setting up the problem


Nope.

The two criteria are both necessary and sufficient, as I said.
So if you want to prove that some strategy-couple is not optimal, you only have to show that one
of the two criteria fails.
You wrote: if it's not an optimal strategy-couple, then the first statement has to be wrong.
That is not true, as I said, because the first one could actually hold while the second does not.
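In a finite (discretized) version of the game, the two criteria can be checked directly. A minimal sketch, using a hypothetical 2x2 zero-sum payoff matrix (matching pennies) purely for illustration:

```python
# Zero-sum game: EV[i][j] is A's expected value when A plays
# row strategy i and B plays column strategy j (B minimizes EV).
# The payoff matrix below is hypothetical (matching pennies).
EV = [[1, -1],
      [-1, 1]]

def is_optimal_couple(EV, i, j):
    """True iff both criteria hold: i is a best response to j, and j to i."""
    a_best = all(EV[i][j] >= EV[k][j] for k in range(len(EV)))      # alpha* in A_opt(beta*)
    b_best = all(EV[i][j] <= EV[i][l] for l in range(len(EV[i])))   # beta* in B_opt(alpha*)
    return a_best and b_best
```

At (i, j) = (0, 0) here the first criterion holds but the second fails, so the couple is not optimal: one failed criterion is enough for a refutation.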
  #83 - 06-16-2004, 01:07 AM
Aisthesis
Re: Setting up the problem

[ QUOTE ]
Let A_opt(beta) be the function that sends beta to the set of A-strategies with maximum EV for A; in
other words:
if alpha* is an element of A_opt(beta), then the following inequality holds for all strategies alpha:
EV(alpha,beta) <= EV(alpha*,beta)

B_opt(alpha) is defined analogously: if beta* is an element of B_opt(alpha), then for all strategies beta,
EV(alpha,beta) >= EV(alpha,beta*)

Now, for alpha* and beta* to be optimal, the following two statements need to be true:

alpha* is an element of A_opt(beta*), and
beta* is an element of B_opt(alpha*).

[/ QUOTE ]

Suppose alpha and beta are a well-optimal strategy-couple.

Then, according to this definition, specifically beta is an element of B_opt(alpha).

Hence, for all strategies beta' the inequality (I'll use EVa just to emphasize that we're always talking about the EV from A's perspective) must hold:

EVa(alpha,beta') >= EVa(alpha,beta)

So, the existence of any beta' such that EVa(alpha,beta') < EVa(alpha,beta) means that alpha and beta are not a well-optimal strategy couple.

Specifically, for such a beta', alpha may or may not be a well-optimal strategy for A against beta'. But you can't switch to some alpha' here as an answer to a refutation claim. The existence of a beta' such that EVa(alpha,beta') < EVa(alpha,beta) always proves conclusively that alpha and beta are not a well-optimal strategy-couple.

Similarly, of course, for the existence of an alpha' while holding beta constant.
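The refutation step can be made mechanical: over any finite set of candidate betas, a single beta' with EVa(alpha, beta') < EVa(alpha, beta) disproves the couple. A sketch with hypothetical EV numbers (not computed from our game):

```python
# EVa[i][j]: A's EV with A playing alpha_i against B playing beta_j.
# The numbers below are hypothetical, for illustration only.
EVa = [[0.5, 0.2, 0.4]]   # one alpha, three candidate betas

def refuting_deviation(EVa, alpha, beta):
    """Return a beta' with EVa(alpha, beta') < EVa(alpha, beta), or None.
    Any such beta' proves (alpha, beta) is not an optimal couple."""
    for b2 in range(len(EVa[alpha])):
        if EVa[alpha][b2] < EVa[alpha][beta]:
            return b2   # B strictly lowers A's EV by deviating to b2
    return None
```

Here refuting_deviation(EVa, 0, 0) finds beta' = 1 (EVa drops from 0.5 to 0.2), so (alpha_0, beta_0) is refuted without ever considering a different alpha.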
  #84 - 06-16-2004, 02:34 PM
Aisthesis
Still incorrect

Well, what may have been throwing this thing off and getting us involved in this huge theoretical discussion (from which I've at least learned something) is that this is a faulty strategy for B.

If A keeps his strategy, we have:

A: bluff-raises [0,1/6]
calls a raise [1/3,1]
value-raises [1/2,1]

Then B can improve by basically playing the same way:

B: bluff-raises [0,1/6]
check-folds [1/6,1/3]
check-calls [1/3,1/2]
value-raises [1/2,1]

I get that B actually wins 1/9 with this strategy. So, B's strategy was not optimal in our previous solutions.

I'm still not sure what A's best counter to this strategy is, though. More on that in another post.
  #85 - 06-16-2004, 03:24 PM
well
nope

I get 1/18 (for A), with your counterstrategy, which is the same as in my optimum.

Could you check for errors, please?

Or, if you agree with my EV-formula, you can derive my answer from there.

Next Time.
  #86 - 06-16-2004, 03:30 PM
Aisthesis
Re: nope

Yes, you're right. Hmmmmm... But then, if the strategy works for A, why doesn't it do any good for B? (That's a bit of a rhetorical question, which I'll try to figure out, but if you have an answer, please let me know!)
  #87 - 06-16-2004, 03:32 PM
well
Re: nope

[ QUOTE ]
Yes, you're right. Hmmmmm... But then, if the strategy works for A, why doesn't it do any good for B? (That's a bit of a rhetorical question, which I'll try to figure out, but if you have an answer, please let me know!)

[/ QUOTE ]

I don't fully understand your question.
Could you please rephrase it?
  #88 - 06-16-2004, 03:44 PM
Aisthesis
Re: nope

This game is perfectly symmetrical except for the order in which players take action.

So, if B checks, A is in exactly the same position B was in when deciding whether to check or raise. So, if a strategy is optimal for one player, shouldn't it be optimal for the other player, too?
  #89 - 06-16-2004, 03:55 PM
well
Re: nope

It is not symmetrical: player B has already checked, so from A's perspective B's hand is no longer uniform on [0,1]!
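Concretely, under B's strategy from post #84, B checks exactly with hands in the union of the check-fold and check-call regions, so after a check A faces a conditional distribution, not the full [0,1]. A minimal sketch using the thresholds quoted above:

```python
# B's strategy (post #84): bluff-raise [0,1/6], check-fold [1/6,1/3],
# check-call [1/3,1/2], value-raise [1/2,1], hands uniform on [0,1].
# After B checks, A's posterior on B's hand is the prior restricted
# to the checking region [1/6, 1/2].
check_lo, check_hi = 1/6, 1/2        # union of check-fold and check-call
check_prob = check_hi - check_lo     # B checks 1/3 of the time
posterior_density = 1 / check_prob   # uniform density 3 on [1/6, 1/2], 0 elsewhere
```

So A's decision after a check is made against a hand known to lie in [1/6, 1/2], which is why the game is not symmetrical between the two players.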

