#81
Terminology
Do you think it would create more or less confusion at this point if I started using the term "strategy" for what I have been calling an ordered pair (in the case of this game) of decision-sets? Then I could use the term "strategy-adaptation function" (I think you suggested something like that in an earlier post) for what I have up to now been calling a "strategy-function."
The only thing is that I'm not about to re-write all this stuff in those terms. So, I don't want to create additional confusion by switching terminology in mid-stream. If you think it's better to switch, I'll be glad to. But if I do, please bear this change in mind when discussing anything in my previous posts (which will all be in the old terminology).

The main thing is that we have some consistent way of talking about the mathematical objects we're dealing with. We can call them "goons" and "spoons" for all I care, but it will make things much easier if we're both (or anyone else who joins in) using the same terms for the same objects.

What I'm obviously driving at here is the question of how important these strategy-adaptation functions really are. And, if we adopt that terminology, I'm working my way up to trying to show that when we've loosely been speaking of "optimal strategies," we were probably actually talking about (partial specifications of) optimal strategy-adaptation functions. Whether or not that is the case is going to depend on the actual results we get when playing around with some different strategies in some of the games we've been examining.
#82
Re: Setting up the problem
Nope. The two criteria are jointly necessary and sufficient, as I said. So if you want to prove that some strategy-couple is not optimal, you only have to prove that one of the two criteria fails. You wrote: if it's not an optimal strategy-couple, then the first statement has to be wrong. This is not true, because the first statement could actually be right while the second is not.
#83
Re: Setting up the problem
[ QUOTE ]
Let A_opt(beta) be the function that sends beta to the set of A-strategies with maximum EV for A; in other words: if alpha* is an element of A_opt(beta), then for all strategies alpha the following inequality holds: EV(alpha,beta) <= EV(alpha*,beta). B_opt(alpha) will be defined analogously, with the inequality EV(alpha,beta) >= EV(alpha,beta*). Now, for alpha* and beta* to be optimal, the following two statements need to be true: alpha* is an element of A_opt(beta*), and beta* is an element of B_opt(alpha*). [/ QUOTE ]

Suppose alpha and beta are a well-optimal strategy-couple. Then, according to this definition, beta in particular is an element of B_opt(alpha). Hence, for all strategies beta' the following inequality must hold (I'll use EVa just to emphasize that we're always talking about the EV from A's perspective): EVa(alpha,beta') >= EVa(alpha,beta).

So, the existence of any beta' such that EVa(alpha,beta') < EVa(alpha,beta) means that alpha and beta are not a well-optimal strategy-couple. For such a beta', alpha may or may not be a well-optimal strategy for A against beta'. But you can't switch to some alpha' here as an answer to a refutation claim. The existence of a beta' such that EVa(alpha,beta') < EVa(alpha,beta) will always prove conclusively that alpha and beta are not a well-optimal strategy-couple. Similarly, of course, for the existence of such an alpha' while holding beta constant.
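The two conditions can be checked mechanically. As a sketch (the payoff matrix below is an arbitrary toy example with a saddle point, not the game from this thread), here is how one would verify both best-response criteria for a pair of pure strategies in a small zero-sum matrix game, and how a single profitable deviation for either player refutes optimality:

```python
def is_equilibrium(M, i_star, j_star):
    """Check the two best-response conditions for a zero-sum game.

    M[i][j] is A's EV when A plays row i and B plays column j.
    A maximizes, B minimizes. The pair (i_star, j_star) is optimal
    iff BOTH conditions hold:
      1. no row i does better for A against column j_star, and
      2. no column j does better for B against row i_star.
    """
    v = M[i_star][j_star]
    a_ok = all(M[i][j_star] <= v for i in range(len(M)))     # criterion 1
    b_ok = all(M[i_star][j] >= v for j in range(len(M[0])))  # criterion 2
    return a_ok, b_ok

# A toy payoff matrix chosen to have a saddle point at (row 0, col 1).
M = [[3, 1, 4],
     [1, 0, 2],
     [2, 1, 3]]

print(is_equilibrium(M, 0, 1))  # (True, True): both criteria hold
print(is_equilibrium(M, 0, 0))  # (True, False): criterion 1 holds, 2 fails
```

Note that the second call illustrates exactly the point made above: the first criterion can hold while the second fails, so disproving either one alone refutes optimality.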
#84
Still incorrect
Well, what may have been throwing this thing off and getting us involved in this huge theoretical discussion (from which I've at least learned something) is that this is a faulty strategy for B.
If A keeps his strategy, we have:

A: bluff-raises [0,1/6], calls a raise with [1/3,1], value-raises [1/2,1]

Then B can improve by basically playing the same way:

B: bluff-raises [0,1/6], check-folds [1/6,1/3], check-calls [1/3,1/2], value-raises [1/2,1]

I get that B actually wins 1/9 with this strategy. So, B's strategy was not optimal in our previous solutions. I'm still not sure what A's best counter to this strategy is, though. More on that in another post.
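For anyone who wants to check numbers like these directly, interval strategies of this form are easy to simulate. The sketch below is only illustrative: the thread never restates the antes or bet size, so it assumes an ante of 1 and a bet of 1, and its output should not be compared against the 1/9 or 1/18 figures discussed here. It shows the mechanics of turning the interval strategies quoted above into a Monte Carlo EV estimate:

```python
import random

def ev_for_A(trials=200_000, seed=1):
    """Monte Carlo EV (for A) of the interval strategies quoted above,
    under ASSUMED rules: each player antes 1, the bet size is 1, hands
    are independent uniform on [0,1], higher hand wins a showdown.
    B acts first (bet or check); facing a bet, A calls or folds;
    after a check A may bet, and B then calls or folds."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        a, b = rng.random(), rng.random()
        if b <= 1/6 or b >= 1/2:              # B bets: bluff or value region
            if a >= 1/3:                      # A calls a raise with [1/3,1]
                total += 2 if a > b else -2   # ante + bet change hands
            else:
                total -= 1                    # A folds, loses his ante
        else:                                 # B checks with [1/6,1/2]
            if a <= 1/6 or a >= 1/2:          # A bets: bluff or value region
                if 1/3 <= b < 1/2:            # B check-calls [1/3,1/2]
                    total += 2 if a > b else -2
                else:
                    total += 1                # B check-folds, A wins the ante
            else:
                total += 1 if a > b else -1   # check-check showdown
    return total / trials
```

The net result per hand is bounded by the ante plus the bet, so under these assumed stakes the estimate must land in [-2, 2]; beyond that, the particular value depends entirely on the assumed parameters.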
#85
nope
I get 1/18 (for A), with your counterstrategy, which is the same as in my optimum.
Check for errors, please? Or, if you agreed with my EV-formula, you can get my answer from there. Next time.
#86
Re: nope
Yes, you're right. Hmmmmm... But then, if the strategy works for A, why doesn't it do any good for B? (That's a bit of a rhetorical question, which I'll try to figure out, but if you have an answer, please let me know!)
#87
Re: nope
[ QUOTE ]
Yes, you're right. Hmmmmm... But then, if the strategy works for A, why doesn't it do any good for B? (That's a bit of a rhetorical question, which I'll try to figure out, but if you have an answer, please let me know!) [/ QUOTE ] I don't fully understand your question. Could you please rephrase it?
#88
Re: nope
This game is perfectly symmetrical except for the order in which players take action.
So, if B checks, A is in exactly the same position that B was previously regarding the decision of whether to check or raise. So, basically, if a strategy is optimal for one player, it should be optimal for the other player, too??
#89
Re: nope
It is not symmetrical: once player B has checked, the hand A is up against is no longer uniformly distributed on [0,1]!
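This can be seen directly from B's strategy quoted a few posts up: B only checks with hands in [1/6,1/2] (the check-fold and check-call regions combined), so conditional on a check, the hand A faces is uniform on [1/6,1/2], not on [0,1]. A quick simulation (a sketch, using the interval boundaries from that strategy) makes the point:

```python
import random

rng = random.Random(0)
# B's strategy from the earlier post: bets with [0,1/6] and [1/2,1],
# checks with [1/6,1/3] (check-fold) and [1/3,1/2] (check-call).
checked = [b for b in (rng.random() for _ in range(100_000))
           if 1/6 <= b < 1/2]

# Conditional on a check, B's hand lies in [1/6, 1/2], not [0, 1].
print(min(checked), max(checked))
print(sum(checked) / len(checked))  # close to 1/3, the midpoint of [1/6, 1/2]
```

So when A acts after a check, he is playing against the range [1/6,1/2], whereas B started against the full [0,1]; the two decisions are not the same problem.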