Two Plus Two Older Archives  

#71  06-15-2004, 06:49 AM
well
Re: Challenge to well (or anyone else)

[ QUOTE ]

However, I don't think that's the way most people are using the term "strategy." Since, on your scheme, you don't even deal with the things I call "strategies," but on my scheme, the things you call "strategies" are clearly defined as decision-sets (or to be really picky: n-tuples of decision sets, but I don't think we're going to get into any trouble by being sloppy in usage on that count), I'd like to stick to my terminology for the moment for the simple reason that that at least allows us to keep track of what we're talking about.

[/ QUOTE ]

In my opinion, your calling your functions "strategies" is itself a reason for readers to lose track. Since you are introducing something you came up with yourself, as a better or nicer (or whatever) approach to the problems, it can't be wise to use terms that already have a different meaning in other people's minds.

[ QUOTE ]

Ok, in the last part of it, there's just one thing that I think needs to be spelled out. alpha* and beta* are supposed to BOTH be optimal relative to each other. Hence, there is no guarantee that 2 such decision-sets exist. That's the reason I go through all of that with the equilibrium question.

[/ QUOTE ]

Optimal strategies as a solution to a game come as couples, hence they're defined as such.
The function A_opt(beta) returns the set of A's strategies that are optimal against B's beta.
So maybe, to avoid confusion, I should have called it A_best or something.
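
Spelled out in the thread's own notation (the EV_A / EV_B subscripts are an addition here, just to mark whose expectation is meant):

A_opt(beta) = { alpha : EV_A(alpha, beta) >= EV_A(alpha', beta) for every decision-set alpha' of A }
B_opt(alpha) = { beta : EV_B(alpha, beta) >= EV_B(alpha, beta') for every decision-set beta' of B }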

I'll have a look at your EV-example now.


[ QUOTE ]

In fact, if my calculations are correct on the game we talked about, I don't think they do (I'll double-check my math before presenting this as a refutation). Basically, on that game we defined certain decision-sets alpha and beta for A and B. If your definition of "optimal" corresponds to the one we were intuitively using, then alpha is a member of A_opt(beta) and beta is a member of B_opt(alpha).

I don't think beta is a member of B_opt(alpha). That's why I don't think your definition will work.

[/ QUOTE ]

#72  06-15-2004, 07:10 AM
Aisthesis
Re: Setting up the problem

Ok, let's consider the following decision-schemes for A and B in this game:

B: value-raises [2/3,1]
bluff-raises [0,1/9]
check-calls [1/3,2/3]

A: calls a raise [1/3,1]
value-raises [1/2,1]
bluff-raises [0,1/6]
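
(Here, as elsewhere in the thread, a bracketed interval presumably means "take that action with hands falling in that interval," hands being drawn uniformly from [0,1].)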

According to well, the value of this game is 1/18 for A (I haven't double-checked that, but I'm assuming it's correct).

We both agree that in some intuitive sense these "strategies" (on my definition of the term we've only explicitly defined a small part of a "strategy," but let's forget about that for the moment) are "optimal." The question is: are they "well-optimal" relative to one another? I'm going to use "well-optimal" here in the obvious way, simply as shorthand for well's definition of "optimal."

Let's call A's decision-set here alpha and B's decision-set beta.

Now, in order to show that these strategies are not well-optimal relative to one another, all I have to show is that there is some other decision-set beta' for B such that EV(alpha,beta) < EV(alpha,beta'), where EV here means B's expectation. (Since the value of the game is 1/18 for A, the current decision-sets give B an EV of -1/18.)

If this were the case, then "well-optimal" would NOT correspond to whatever intuitive sense of "optimal" we were using when we agreed on these "strategies."

Correct?
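
The payoff rules of this game were pinned down earlier in the thread and aren't restated in this post, so the 1/18 value can't be re-derived from what's quoted here. The test being proposed is mechanical, though: hold alpha fixed and grid-search B's threshold space for a beta' with a higher EV. Below is a minimal Python sketch of that search. The game it implements is a deliberately simplified stand-in (one half-street bet, uniform [0,1] hands, pot of 1), and ev_for_B, its parameters, and the plugged-in thresholds are illustrative assumptions, not the game from this thread.

import numpy as np

# Stand-in game (an assumption for illustration, NOT the thread's game):
# A and B hold independent uniform [0,1] hands, pot = 1. B may bet 1,
# bluffing with hands below bluff_top and value-betting with hands above
# value_bot; otherwise he checks and the pot goes to the best hand.
# A calls a bet with any hand at or above call_thresh, else folds.

rng = np.random.default_rng(0)
N = 500_000
a = rng.random(N)  # A's hands
b = rng.random(N)  # B's hands

def ev_for_B(call_thresh, bluff_top, value_bot):
    # Monte Carlo estimate of B's EV (in units of the 1-unit bet).
    bet = (b <= bluff_top) | (b >= value_bot)
    call = a >= call_thresh
    b_best = b > a
    showdown = np.where(b_best, 1.0, 0.0)    # checked down: winner takes pot
    called = np.where(b_best, 2.0, -1.0)     # bet and called: pot+bet or -bet
    return np.where(bet, np.where(call, called, 1.0), showdown).mean()

# Hold A's calling threshold fixed and search B's thresholds for an
# improvement over B's current pair -- the beta' described above.
alpha_call = 1/3
base = ev_for_B(alpha_call, 1/9, 2/3)
best = max((ev_for_B(alpha_call, lo, hi), lo, hi)
           for lo in np.linspace(0.0, 0.5, 51)
           for hi in np.linspace(0.5, 1.0, 51))
print(f"current beta: {base:.4f}  best found: {best[0]:.4f} "
      f"(bluff_top={best[1]:.2f}, value_bot={best[2]:.2f})")

Any threshold pair whose estimate beats the base value by more than the Monte Carlo noise exhibits exactly the beta' the argument calls for; for the thread's real game, only ev_for_B would need to change.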

#73  06-15-2004, 07:20 AM
Aisthesis
Re: Challenge to well (or anyone else)

[ QUOTE ]
[ QUOTE ]

However, I don't think that's the way most people are using the term "strategy." Since, on your scheme, you don't even deal with the things I call "strategies," but on my scheme, the things you call "strategies" are clearly defined as decision-sets (or to be really picky: n-tuples of decision sets, but I don't think we're going to get into any trouble by being sloppy in usage on that count), I'd like to stick to my terminology for the moment for the simple reason that that at least allows us to keep track of what we're talking about.

[/ QUOTE ]

In my opinion, your calling your functions "strategies" is itself a reason for readers to lose track. Since you are introducing something you came up with yourself, as a better or nicer (or whatever) approach to the problems, it can't be wise to use terms that already have a different meaning in other people's minds.

[/ QUOTE ]

I think they actually have the meaning I claim (although the intuitive use leads to some confusion between decision-sets and strategies), and I'm getting ready to attempt to prove that they don't have the meaning you claim.

[ QUOTE ]
[ QUOTE ]

Ok, in the last part of it, there's just one thing that I think needs to be spelled out. alpha* and beta* are supposed to BOTH be optimal relative to each other. Hence, there is no guarantee that 2 such decision-sets exist. That's the reason I go through all of that with the equilibrium question.

[/ QUOTE ]

Optimal strategies as a solution to a game come as couples, hence they're defined as such.

[/ QUOTE ]

So, a well-optimal decision-set for A is only well-optimal relative to a decision-set for B. Or am I wrong? I definitely want to get complete clarity on what I need to prove before doing the work on this.
[ QUOTE ]
The function A_opt(beta) returns the set of A's strategies that are optimal against B's beta.
So maybe, to avoid confusion, I should have called it A_best or something.

I'll have a look at your EV-example now.

[ QUOTE ]

In fact, if my calculations are correct on the game we talked about, I don't think they do (I'll double-check my math before presenting this as a refutation). Basically, on that game we defined certain decision-sets alpha and beta for A and B. If your definition of "optimal" corresponds to the one we were intuitively using, then alpha is a member of A_opt(beta) and beta is a member of B_opt(alpha).

I don't think beta is a member of B_opt(alpha). That's why I don't think your definition will work.

[/ QUOTE ]

[/ QUOTE ]

#74  06-15-2004, 07:37 AM
well
Re: Setting up the problem

[ QUOTE ]

Let's call A's decision-set here alpha and B's decision-set beta.

Now, in order to show that these strategies are not well-optimal relative to one another, all I have to show is that there is some other decision-set beta' for B such that EV(alpha,beta) < EV(alpha,beta'), where EV here means B's expectation. (Since the value of the game is 1/18 for A, the current decision-sets give B an EV of -1/18.)

[/ QUOTE ]

This is not entirely true. This beta could maximize B's EV given alpha while {alpha, beta} is still not an optimal strategy-couple. In that case there is an alpha'... (same as what you did, but the other way around).

#75  06-15-2004, 07:49 AM
Aisthesis
Re: Setting up the problem

Well, if a well-optimal strategy couple (alpha, beta) exists, then alpha is a member of your A_opt(beta) and beta is a member of your B_opt(alpha)--at least if I'm understanding you correctly.

I don't think you're going to find any "strategy couple" that meets those criteria. So basically, your definition will define "optimal strategy" out of existence in some (many... most?) cases.

#76  06-15-2004, 07:53 AM
Aisthesis
Re: Setting up the problem

By the way, I think my first attempt at defining an optimal strategy did the same thing: If you define them that way, they hardly ever even exist in interesting cases.

But I think the additional requirement that the decision-sets be in a state of equilibrium will resolve that problem. Without my concept of strategies (or something similar), I don't think it's going to be possible to define what an equilibrium is.

#77  06-15-2004, 07:57 AM
well
Definition of Nash Equilibrium

DEFINITION: Nash Equilibrium

If there is a set of strategies with the property that no player can benefit by changing her strategy while the other players keep their strategies unchanged, then that set of strategies and the corresponding payoffs constitute the Nash Equilibrium.
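
Restated in the notation this thread has been using: the couple (alpha*, beta*) is a Nash equilibrium exactly when EV_A(alpha, beta*) <= EV_A(alpha*, beta*) for every alpha available to A, and EV_B(alpha*, beta) <= EV_B(alpha*, beta*) for every beta available to B. That is: alpha* is a member of A_opt(beta*), and beta* is a member of B_opt(alpha*).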

#78  06-15-2004, 08:47 AM
Aisthesis
Re: Definition of Nash Equilibrium

That might actually be enough, although it seems a little strong.

By the way, I do get (being very careful about my calculation this time) that B loses exactly the same amount as on beta if he moves his call-threshold from 1/3 up to 1/2. The thing about moving it to 1/2 is that alpha and beta' will no longer be in a state of Nash equilibrium, since at that call-threshold, A can indeed change strategies/decision-sets to get better results (presumably tighter raising criteria). Also, while beta' will be a member of your set B_opt(alpha), alpha will presumably NOT be a member of your set A_opt(beta').

I still don't see what you mean by (well-)optimal strategy couples alpha and beta unless you mean that alpha is a member of A_opt(beta) and beta is a member of B_opt(alpha).

That definition seems pretty close or identical to the Nash idea anyway.

Anyhow, if the strategies/decision-sets that we came up with are in a Nash equilibrium, then it should be true that beta is a member of B_opt(alpha) and that alpha is a member of A_opt(beta). Right?

So, is that your definition of an optimal strategy couple? I'm still not completely clear on what your thinking is here.

#79  06-15-2004, 10:02 AM
Aisthesis
Re: Setting up the problem

[ QUOTE ]
[ QUOTE ]

Let's call A's decision-set here alpha and B's decision-set beta.

Now, in order to show that these strategies are not well-optimal relative to one another, all I have to show is that there is some other decision-set beta' for B such that EV(alpha,beta) < EV(alpha,beta'), where EV here means B's expectation. (Since the value of the game is 1/18 for A, the current decision-sets give B an EV of -1/18.)

[/ QUOTE ]

This is not entirely true. This beta could maximize B's EV given alpha while {alpha, beta} is still not an optimal strategy-couple. In that case there is an alpha'... (same as what you did, but the other way around).

[/ QUOTE ]

Here's what's confusing me. In a previous post, you said:

"Now, for alpha* and beta* to be optimal, the following two statements need to be true:

alpha* is an element of A_opt(beta*), and
beta* is an element of B_opt(alpha*)."

Now, you are saying that these statements don't have to be true. Which is it?

Obviously, if EV(alpha, beta) < EV(alpha,beta') then beta is not a member of B_opt(alpha).

These existence problems may be resolvable (as Nash equilibrium would suggest), in which case we really DON'T need my "strategy" as distinct from "decision-sets." But the point at this stage is simply that that is a VERY STRONG statement regarding a decision-set for any one player.

That decision-set has to be COMPLETELY IMMUNE to any change in the decision-sets on the part of the other player! If the decision-sets we're talking about here really do have that property, then great! We're done. The reason that I am looking to soften this definition is that I fear that we're going to run into existence problems sometimes if we define optimal strategies in that way.
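
(In symbols: EV_B(alpha, beta') <= EV_B(alpha, beta) for every decision-set beta' available to B, and the mirror-image condition with the roles of A and B swapped.)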

By the way, I think I may have indeed come up with an improvement for B that will work in our game. But I'll double- and triple-check my math before posting. Also, I'm still waiting on clarification regarding the above. The fact that it's extremely difficult to beat our solution on the various schemes I've tried does seem to suggest that the decision-sets/strategies may indeed be in a Nash equilibrium (which as I see it does boil down to the same criteria you give). But I may have found one that would disprove that...

#80  06-15-2004, 01:09 PM
Aisthesis
Re: Definition of Nash Equilibrium

[ QUOTE ]
DEFINITION: Nash Equilibrium

If there is a set of strategies with the property that no player can benefit by changing her strategy while the other players keep their strategies unchanged, then that set of strategies and the corresponding payoffs constitute the Nash Equilibrium.

[/ QUOTE ]

I'd just like to make sure that Nash equilibrium is in fact identical with your definition.

I'm assuming that the term "strategy" here is used for what I'm calling a "decision-set." Now, applied to a 2-player game like the one we are dealing with, we have 2 "strategies," which we can, in well's terminology, label alpha and beta.

So, if alpha and beta are both such that A cannot benefit from changing alpha as long as beta remains constant, and B cannot benefit from changing beta as long as alpha remains constant, doesn't that mean precisely (in well's terminology):

alpha is a member of A_opt(beta) and
beta is a member of B_opt(alpha)?

If that is the case (please let me know if I have misinterpreted either well or the Nash equilibrium), is it at least clear why I'm worried that the definition may be too strong, hence often yielding no strategies at all that are in a Nash equilibrium? I still strongly suspect that this is already the case in some of the games we have dealt with here: namely, that there are no 2 "strategies" (=decision sets) alpha and beta for A and B such that they are in a Nash equilibrium (i.e., are both "optimal" relative to one another on well's definition).

I also hope it is clear that my definition of "optimal strategy" will automatically cover all the cases we've had, and that it is a weaker definition than that of well or of the Nash equilibrium.

The reason is simply this: If we have alpha and beta such that alpha is a member of A_opt(beta) and beta is a member of B_opt(alpha), then all A has to do is choose the strategy-function defined by alpha, without any regard for what decision-sets B is using (since beta is a member of B_opt(alpha), there's no way B can do any better at all against alpha). He can play according to alpha ALWAYS.

And that "constant" strategy-function will always form an optimal strategy-function according to my definition (or at least it will be at the obvious equilibrium points a, b, c, and d). The same will apply to B with beta.

If A or B changes strategies away from those that are in a Nash equilibrium (as I understand it), then the absolute best that each can do by changing decision-sets is to get an EV with the same result (co-optimal solutions).