Two Plus Two Older Archives  

General Poker Discussion > Poker Theory
#61, 06-15-2004, 04:23 AM
Aisthesis (Junior Member, Join Date: Nov 2003, Posts: 5)
Re: Optimal strategies

Hey, Jerrod! Glad you chimed in, and looking forward to your book with Chen on these things (which perhaps will obviate some of my theoretical rantings...)!

I agree with you here, but I suspect there is some confusion in this issue (at least for me) due to mix-ups between STRATEGIES (a function mapping any decision-sets of your opponents into decision-sets for you) and decision-sets.

It's easy to think of a strategy for A as just an n-tuple of decision-sets for various circumstances. But I really don't think decision-sets are what we are really talking about on optimal and suboptimal, even though it's pretty easy to talk that way in practice.

I think a strategy has to be viewed as a rule by which one adapts one's decision-sets to the currently operative decision-sets (NOT strategies--if you define it that way, you end up getting into an irresolvable mess of infinite regresses, it seems to me) of one's opponent.

What this is all leading up to is what it really is to claim (or refute) that a strategy is optimal. I really think you get into trouble if you set "strategy" as simply identical with "decision-set."
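To pin the distinction down, here is a minimal Python sketch (all names and numbers are my own illustrations, not from the thread): a decision-set is just data, a threshold on [0,1], while a strategy in the sense above is a function from the opponent's currently operative decision-set to one of ours.

```python
# Illustrative sketch only: hands are points in [0,1], and a decision-set is
# represented by a single threshold ("act with any hand above t").

constant_decision_set = 0.7  # a bare decision-set: just data

def constant_strategy(opp_threshold):
    """A 'constant strategy': returns the same decision-set no matter
    what the opponent's decision-set is."""
    return constant_decision_set

def adaptive_strategy(opp_threshold):
    """A strategy in the adaptive sense: a rule mapping the opponent's
    decision-set to one of ours (the adjustment rule here is a toy)."""
    return min(1.0, opp_threshold + 0.1)

print(constant_strategy(0.5))   # unchanged by the opponent's play
print(adaptive_strategy(0.5))   # shifts with the opponent's play
```

Any fixed pair of decision-sets determines one constant strategy, but the same pair can be a value of many different adaptive strategy-functions, which is the point of the distinction.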
#62, 06-15-2004, 04:36 AM
Aisthesis (Junior Member, Join Date: Nov 2003, Posts: 5)
Challenge to well (or anyone else)

Ok, I've gotten in pretty deep on these definitions, and I don't know for sure whether you're with me or not in accepting these distinctions as having any value at all.

The basic question here is: Can we define "optimal" and "suboptimal" in terms of decision-sets alone?

If we can (and I originally thought that that was what you were saying), then all the definitional baggage I'm bringing in with the introduction of "strategies" over and above decision sets is completely unnecessary.

So, if you don't think there's any need for what I'm calling a "strategy," my challenge is simply this: Define in precise mathematical terms what it is for a decision-set to be optimal.

I think what will happen on any such definition is this: Many games will have no optimal decision-sets even though we are all agreed (more or less intuitively) that there is an optimal strategy.

Hence, if you care to meet my challenge, there is a built-in counter-challenge for me: Construct a game where we intuitively speak of "optimal strategies" (we're agreed on the intuitively defined "optimal strategy" in the game we've worked on, as well as in both of David's games) but where there is no optimal decision-set. The obvious choice would be the game we've been analysing--I hope my math is right on that one, because then my part of the challenge would already be met.
#63, 06-15-2004, 05:00 AM
Jerrod Ankenman (Member, Join Date: Jun 2004, Posts: 40)
Re: Optimal strategies

[ QUOTE ]
Hey, Jerrod! Glad you chimed in, and looking forward to your book with Chen on these things (which perhaps will obviate some of my theoretical rantings...)!

I agree with you here, but I suspect there is some confusion in this issue (at least for me) due to mix-ups between STRATEGIES (a function mapping any decision-sets of your opponents into decision-sets for you) and decision-sets.

[/ QUOTE ]

Ok, well, you can't just define strategy to mean something different. A strategy is a set of instructions on how to play the game from any point; the opponent's instructions are irrelevant except that they may cause certain information sets to occur instead of others.


[ QUOTE ]
It's easy to think of a strategy for A as just an n-tuple of decision-sets for various circumstances. But I really don't think decision-sets are what we are really talking about on optimal and suboptimal, even though it's pretty easy to talk that way in practice.

I think a strategy has to be viewed as a rule by which one adapts one's decision-sets to the currently operative decision-sets (NOT strategies--if you define it that way, you end up getting into an irresolvable mess of infinite regresses, it seems to me) of one's opponent.

[/ QUOTE ]

If you need to know something about your opponent's play to call a strategy optimal, you're not understanding the term.

Jerrod
#64, 06-15-2004, 05:34 AM
Aisthesis (Junior Member, Join Date: Nov 2003, Posts: 5)
Optimal strategies (improved version)

Well, Jerrod is probably going to kill me for this one (although I actually don't think I'm contradicting his meaning, but the wording is likely to sound that way), but I think the foregoing definition of an optimal strategy is too strong. Using it, I suspect we won't get any optimal strategies in some cases where we want to have them.

First, I should note that, after my further details on the EVa function, the definitions still do hold up (assuming that the double-integral EVa is defined for the given decision-sets). Since a strategy-function for A determines his decision-sets based on those of B, then we actually can write EVa(f,c,d) because the decision-sets c and d, together with the strategy-function f, will determine all 4 decision-sets in this game, hence the Na function for all possible hands, and hence the double-integral which is EVa (if it is well-defined, as it will be on any tenable real-life strategy).

Before proceeding, it is also worth noting that we normally don't specify full-fledged STRATEGIES for A and B; the domains of the strategy-functions are just too large (direct products of subsets of [0,1]). It's not hard to specify decision-sets, however, and we can also specify partial strategies over relatively normal decision-sets for an opponent. (On the abnormal ones, the EVa function isn't even going to be defined, so we don't have to worry about optimal and suboptimal there.)

Anyhow, when we specify part of a strategy--for instance, by using maximization of EVa to actually FIND (and define) A's optimal strategy constructively in all cases where EVa is definable, and then assuming ANY manner of coming up with decision-sets in the undefinable cases to "complete" A's strategy--we have an implicit way of calculating decision-sets for A given decision-sets for B. So, aside from the arbitrary part where EVa is undefined, we actually have given a method for coming up with a complete strategy. It's just that we usually only spell out a small part of it.
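For concreteness, the "double-integral EVa" can be approximated numerically once decision-sets fix the payoff function Na. The sketch below uses a toy one-street kernel of my own (pot 1, bet 1, one threshold per player), not the game from the thread.

```python
# Toy model (my assumptions): A holds x, B holds y, both Uniform[0,1].
# A bets with x > a_bet; facing a bet, B calls with y > b_call.

def Na(x, y, a_bet, b_call):
    """A's payoff for one deal, given both players' decision-sets (thresholds)."""
    if x <= a_bet:                    # A checks: showdown for the pot
        return 1.0 if x > y else 0.0
    if y <= b_call:                   # A bets, B folds: A takes the pot
        return 1.0
    return 2.0 if x > y else -1.0     # A bets, B calls: win pot+bet or lose bet

def EVa(a_bet, b_call, n=200):
    """Midpoint-rule approximation of the double integral of Na over [0,1]^2."""
    h = 1.0 / n
    return sum(Na((i + 0.5) * h, (j + 0.5) * h, a_bet, b_call)
               for i in range(n) for j in range(n)) * h * h

# Sanity check: if A never bets (a_bet = 1.0), EVa reduces to P(x > y) = 1/2.
print(EVa(1.0, 0.5))
```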

Ok, now here's my new and improved definition of an optimal strategy:

Given a strategy-function g for B (this is the part where Jerrod will kill me! more on that later), a strategy-function f is optimal for A iff:

For all equilibrium decision-sets a, b, c and d on f and g (equilibrium is only defined in terms of f and g) and any other strategy-function h for A such that a, b, c and d are also equilibrium decision-sets on h and g, we have:

EVa(f,c,d) >= EVa(h,c,d)

What Jerrod is not going to like is that this does seem to make the definition of "optimal" RELATIVE to an opponent's strategy-function. It does actually do this. The reason is that you have to have some guarantee that the decision-sets involved are in an equilibrium, and there is no way to do that without bringing in your opponent's strategy-function.

BUT (!!!), I'm guessing that it is TRUE (although this would be very tough to prove) that:

Aisthesis' Theorem:
If f is an optimal strategy-function for A relative to B's strategy-function g, then f can be "improved" to a strategy-function f$ such that f$ will be an optimal strategy-function relative to ANY strategy-function for B on which EVa is well-defined on all equilibrium sets.

So, basically, the relative definition of optimal is just a step used in order to get a guarantee that we're dealing with sets that are in equilibrium. If Aisthesis' Theorem is true (anyone here have enough analysis to take any kind of stab at that one?? Unless it's a lot easier than it looks, I doubt I could have done it even when I was somewhat on top of such problems), then there normally will be some strategy-function f that will work over the full-range of strategies for B.

Hopefully that will satisfy Jerrod. In any case, I think one does have to restrict maximization of EV somehow to equilibrium points.

A suboptimal strategy for A relative to B's strategy-function g should also be obvious: f is a suboptimal strategy-function relative to g iff for equilibrium decision-sets a, b, c, and d, there is another strategy-function h (for A) such that a, b, c, and d are also equilibrium points AND EVa(h,c,d) > EVa(f,c,d).

That should pretty much complete my very abstract attempt at some definitions relevant at least to this part of game-theory--pending objections from well, Jerrod or others.
#65, 06-15-2004, 05:37 AM
well (Junior Member, Join Date: May 2003, Posts: 25)
Re: Second remark

It's just that you said "so the EV is properly defined".
I figured you had to assume the things I wrote.
#66, 06-15-2004, 05:49 AM
well (Junior Member, Join Date: May 2003, Posts: 25)
Re: Challenge to well (or anyone else)

Now, let's use "strategy" for what most people use it for. (Not saying that what you did can't turn
out to be interesting, but it's at least confusing when we mix things up.)

From what I understand of optimal strategies, I already wrote in a previous post what I think a
proper mathematical definition could be.

Let alpha and beta be fixed strategies, i.e. sets of decision-points, or intervals or whatever:
alpha for A, beta for B.

Let EV(alpha,beta) denote A's EV given both strategies.

Let A_opt(beta) be the function that sends beta to the set of A-strategies with maximum EV for A. In
other words: if alpha* is an element of A_opt(beta), then the following inequality holds for all
strategies alpha:
EV(alpha,beta) <= EV(alpha*,beta)

B_opt(alpha) is defined analogously: if beta* is an element of B_opt(alpha), then for all strategies beta,
EV(alpha,beta) >= EV(alpha,beta*)

Now, for alpha* and beta* to be optimal, the following two statements need to be true:

alpha* is an element of A_opt(beta*), and
beta* is an element of B_opt(alpha*).

That's about how I see it, I guess.
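This definition can be exercised on a small example. The sketch below (a toy 2x2 zero-sum payoff matrix of my own, where EV[i][j] is A's EV, not the thread's game) computes A_opt and B_opt by brute force and searches for a mutually optimal pair; note that in this particular matrix no pure pair satisfies both conditions at once.

```python
# Toy zero-sum game: EV[i][j] = A's EV when A plays strategy i, B plays j.
EV = [[3, 1],
      [0, 2]]

def A_opt(beta):
    """All A-strategies with maximum EV for A against a fixed beta."""
    best = max(EV[i][beta] for i in range(len(EV)))
    return {i for i in range(len(EV)) if EV[i][beta] == best}

def B_opt(alpha):
    """All B-strategies minimizing A's EV against a fixed alpha (zero-sum)."""
    best = min(EV[alpha][j] for j in range(len(EV[0])))
    return {j for j in range(len(EV[0])) if EV[alpha][j] == best}

# (alpha*, beta*) is optimal iff each is in the other's opt-set.
equilibria = [(a, b) for a in range(len(EV)) for b in range(len(EV[0]))
              if a in A_opt(b) and b in B_opt(a)]
print(equilibria)  # empty for this matrix: no pure pair is mutually optimal
```

That such a pair may fail to exist (at least without mixing) is exactly the existence question the definition leaves open.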
#67, 06-15-2004, 05:52 AM
Aisthesis (Junior Member, Join Date: Nov 2003, Posts: 5)
Re: Optimal strategies

[ QUOTE ]
[ QUOTE ]
Hey, Jerrod! Glad you chimed in, and looking forward to your book with Chen on these things (which perhaps will obviate some of my theoretical rantings...)!

I agree with you here, but I suspect there is some confusion in this issue (at least for me) due to mix-ups between STRATEGIES (a function mapping any decision-sets of your opponents into decision-sets for you) and decision-sets.

[/ QUOTE ]

Ok, well, you can't just define strategy to mean something different. A strategy is a set of instructions on how to play the game from any point; the opponent's instructions are irrelevant except that they may cause certain information sets to occur instead of others.

[/ QUOTE ]

"Instructions" isn't a mathematically precise term--unless it boils down either to what I'm calling a decision-set or what I'm calling a strategy. Actually, the second part of your remark, namely "except that they may cause certain information sets to occur instead of others," is the whole reason why I want to distinguish a strategy from a decision-set.

I really don't think that "strategies" (on our intuitive understanding) only have the form: if I have this hand and the betting comes to me in this way, I will always do this REGARDLESS. I think a strategy needs to include the ability to adjust to an opponent's play.

On my definitions, any ordered pair of decision-sets for A in this game defines one "constant strategy" but it can be a part of many, many, variable strategies.


[ QUOTE ]
It's easy to think of a strategy for A as just an n-tuple of decision-sets for various circumstances. But I really don't think decision-sets are what we are really talking about on optimal and suboptimal, even though it's pretty easy to talk that way in practice.

I think a strategy has to be viewed as a rule by which one adapts one's decision-sets to the currently operative decision-sets (NOT strategies--if you define it that way, you end up getting into an irresolvable mess of infinite regresses, it seems to me) of one's opponent.

[/ QUOTE ]

If you need to know something about your opponent's play to call a strategy optimal, you're not understanding the term.

Jerrod

[/ QUOTE ]

Well, the question is how to deal with "are irrelevant except that they may cause certain information sets to occur instead of others"? Are the opponent's instructions relevant or not? If they're not relevant, there are no exceptions.
#68, 06-15-2004, 06:00 AM
Aisthesis (Junior Member, Join Date: Nov 2003, Posts: 5)
Re: Second remark

[ QUOTE ]


[ QUOTE ]

Next step: Given the strategies f (for A) and g (for B), let's define an EQUILIBRIUM as any set of 4 subsets of [0,1] a, b, c and d such that f(c,d) = (a,b) and g(a,b) = (c,d).

[...]

We now have an equilibrium of 4 sets such that f(c,d) = (a,b) and g(a,b) = (c,d). So, the decision-sets are now well-defined and stable from the standpoint of the strategies of both players.

[...]

Hence, EVa and EVb are also both well-defined.


[/ QUOTE ]

You made a serious mistake here!
You have only defined an equilibrium, and after that just assumed that there is always exactly one,

[/ QUOTE ]

Where did I assume that there is always exactly one? That's the part I was defending myself on. I agree that this was not an adequate definition of EVa.

[ QUOTE ]
or - if there were more - they would result in the same EVa and EVb.

If you really would like to construct everything carefully, this may not be left out!

[/ QUOTE ]
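The quoted equilibrium condition (f(c,d) = (a,b) and g(a,b) = (c,d)) can be searched for numerically. The sketch below is my own simplification, with one threshold per player rather than four sets and toy affine strategy-functions: from a starting guess it iterates both functions and reports a fixed point if the iteration settles.

```python
def find_equilibrium(f, g, start=(0.5, 0.5), iters=200, tol=1e-9):
    """Iterate a <- f(c), c <- g(a); return (a, c) once both decision-sets
    are reproduced to within tol, or None if no fixed point is reached."""
    a, c = start
    for _ in range(iters):
        a_new, c_new = f(c), g(a)
        if abs(a_new - a) < tol and abs(c_new - c) < tol:
            return a_new, c_new
        a, c = a_new, c_new
    return None

f = lambda c: 0.5 * c + 0.2   # A's strategy-function (toy, contractive)
g = lambda a: 0.5 * a + 0.1   # B's strategy-function (toy, contractive)
print(find_equilibrium(f, g))  # converges near (1/3, 4/15)
```

With contractive toy functions the fixed point is unique, but nothing in the definition guarantees that in general: there may be several equilibria, or none, which is precisely the objection being discussed.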
#69, 06-15-2004, 06:35 AM
Aisthesis (Junior Member, Join Date: Nov 2003, Posts: 5)
Re: Challenge to well (or anyone else)

I think this does give a precise definition of optimal in terms of decision-sets.

However, I don't think that's the way most people are using the term "strategy." On your scheme, you don't even deal with the things I call "strategies," while on my scheme the things you call "strategies" are clearly defined as decision-sets (or, to be really picky, n-tuples of decision-sets, but I don't think we'll get into any trouble by being sloppy on that count). So I'd like to stick to my terminology for the moment, simply because it lets us keep track of the different things we're talking about. If it turns out that the introduction of my "strategies" has no value, then we can reconsider, and I'll be quite willing to adopt your terminology.

[ QUOTE ]
Now, let's use "strategy" for what most people use it for. (Not saying that what you did can't turn
out to be interesting, but it's at least confusing when we mix things up.)

From what I understand of optimal strategies, I already wrote in a previous post what I think a
proper mathematical definition could be.

Let alpha and beta be fixed strategies, i.e. sets of decision-points, or intervals or whatever:
alpha for A, beta for B.

Let EV(alpha,beta) denote A's EV given both strategies.

Let A_opt(beta) be the function that sends beta to the set of A-strategies with maximum EV for A. In
other words: if alpha* is an element of A_opt(beta), then the following inequality holds for all
strategies alpha:
EV(alpha,beta) <= EV(alpha*,beta)

[/ QUOTE ]

Ok, now I understand what you meant. I note that "optimal" is defined only relative to the ordered pair of decision-sets (beta) for B. The "optimum" function A_opt simply defines the set of ALL co-optimal decision-set ordered pairs for A on which A has maximum EV. I'm with you up to here.

[ QUOTE ]
B_opt(alpha) is defined analogously: if beta* is an element of B_opt(alpha), then for all strategies beta,
EV(alpha,beta) >= EV(alpha,beta*)

Now, for alpha* and beta* to be optimal, the following two statements need to be true:

alpha* is an element of A_opt(beta*), and
beta* is an element of B_opt(alpha*).

That's about how I see it, I guess.

[/ QUOTE ]

Ok, in the last part of it, there's just one thing that I think needs to be spelled out: alpha* and beta* are supposed to BOTH be optimal relative to each other. Hence, there is no guarantee that 2 such decision-sets exist. That's the reason I go through all of that with the equilibrium question.

In fact, if my calculations are correct on the game we talked about, I don't think they do (I'll double-check my math before presenting this as a refutation). Basically, on that game we defined certain decision-sets alpha and beta for A and B. If your definition of "optimal" corresponds to the one we were intuitively using, then alpha is a member of A_opt(beta) and beta is a member of B_opt(alpha).

I don't think beta is a member of B_opt(alpha). That's why I don't think your definition will work.
#70, 06-15-2004, 06:46 AM
Aisthesis (Junior Member, Join Date: Nov 2003, Posts: 5)
Suboptimal strategies

One final note on defining an "optimal strategy" in this way. The basic question is: What does it take to prove that a strategy is not optimal?

Let's say we have 2 (in practice usually only partially defined--but we are hopefully capable of expanding these partially-defined strategies into complete strategy-functions) strategy-functions f and g for A and B.

It is a proof that f is suboptimal relative to g if:

You can find

1) equilibrium decision-sets a, b, c, and d over f and g and
2) another strategy-function h (for A) such that a, b, c, and d are also equilibrium-sets over h and g. And:
3) EVa(h,c,d) > EVa(f,c,d)
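Conditions 1 and 2 are mechanical checks once candidate decision-sets are in hand. Here is a sketch (again my one-threshold-per-player simplification, with toy functions; condition 3 would then compare EVa of the two strategy-functions on those sets):

```python
def is_equilibrium(strat_a, strat_b, a, c, tol=1e-9):
    """(a, c) is an equilibrium over (strat_a, strat_b) iff each function
    reproduces its player's decision-set from the opponent's."""
    return abs(strat_a(c) - a) < tol and abs(strat_b(a) - c) < tol

g = lambda a: 0.5                          # B: fixed threshold, whatever A does
f = lambda c: 0.3                          # A's candidate strategy-function
h = lambda c: 0.3 if c == 0.5 else 0.9     # agrees with f at c = 0.5 only

a, c = 0.3, 0.5
print(is_equilibrium(f, g, a, c))  # condition 1: (a, c) is an equilibrium of (f, g)
print(is_equilibrium(h, g, a, c))  # condition 2: also an equilibrium of (h, g)
```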