Graduate Econometrics paper on NLHE


A_PLUS
11-29-2004, 11:20 AM
I am currently working toward my PhD in economics and beginning to think about possible topics for my dissertation. I have been reading some work that has been done on poker in the economics literature, and of course I am going to explore this option to the fullest.
I have had the very basics of game theory, but do not really get into it until next semester, so for now I am concentrating on an econometrics/statistics paper.

The general topic of the paper is this:
Can we estimate, with statistical significance, the fold equity of a particular situation?

I want to use low- to mid-stakes NL (25/50/100??) as an example.
For starters, I will look at particular situations (e.g., you are last to act on a round and the action is checked to you: what percentage of the time will a pot-sized bet buy the pot?). I plan to incorporate variables such as: table type (LAG, etc.), number of players in the round, actions in prior rounds, PokerTracker aggression of each player, value of the board (scare potential), etc.

Can anyone suggest any regression techniques that I should consider? I am considering logistic regression, but more by default than from sound theory. My dependent variable is going to be binary, representing whether the bet succeeded in ending the hand.
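To make the proposed setup concrete, here is a minimal sketch of that kind of logit model in Python. Everything here is invented for illustration: the two features (bet size in pot units, number of opponents) and the synthetic data stand in for the real PokerTracker variables, and the fit is a bare-bones gradient ascent rather than a statistics package.

```python
import math
import random

def fit_logit(xs, ys, lr=0.1, steps=2000):
    """Fit P(bet succeeds) = sigmoid(w0 + w . x) by gradient ascent
    on the log-likelihood. w[0] is the intercept."""
    n_feat = len(xs[0])
    w = [0.0] * (n_feat + 1)
    for _ in range(steps):
        grad = [0.0] * (n_feat + 1)
        for x, y in zip(xs, ys):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))
            err = y - p                      # score contribution of this hand
            grad[0] += err
            for j, xj in enumerate(x):
                grad[j + 1] += err * xj
        w = [wi + lr * g / len(xs) for wi, g in zip(w, grad)]
    return w

def predict(w, x):
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: larger bets into fewer opponents succeed more often.
random.seed(1)
data = []
for _ in range(500):
    bet = random.uniform(0.3, 1.5)           # bet size in pot units
    opps = random.randint(1, 4)              # players left to act
    p_true = 1.0 / (1.0 + math.exp(-(1.8 * bet - 0.8 * opps)))
    data.append(((bet, opps), 1 if random.random() < p_true else 0))

xs, ys = zip(*data)
w = fit_logit(xs, ys)
print(predict(w, (1.0, 1)))   # pot-sized bet heads-up
print(predict(w, (0.4, 4)))   # small bet into four opponents
```

In practice you would hand this to a package (RATS, MATLAB, or statsmodels' `Logit`) rather than roll your own optimizer; the point is only the shape of the model: binary outcome, sigmoid link, situation features on the right-hand side.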

mannika
11-29-2004, 12:51 PM
My only concern for you is that it is going to take an extremely long time to get a statistically sound model, i.e., to observe enough trials to account for the other variables (other players, cards on the flop, etc.). Normally this might not be a problem, but if you plan on testing this out with real money, it could end up costing you a LOT of money; and if you don't, that probably takes away from the accuracy of your results.

A_PLUS
11-29-2004, 01:22 PM
I realize that a model that takes every factor into account would be extremely difficult and time-consuming; the work done on the Loki/Poki bots shows this. I am looking for a more limited, yet I believe valuable, tool.

Here is what I am hoping to accomplish.
You can always calculate what you view your pot equity to be. In a real game this may not be done with any precision, but you guesstimate a percentage. Likewise, when making a bluff we guesstimate the fold equity of our hand based on the situation. For example, you might think to yourself, "These players are pretty tight, and I think someone would have raised the flop with a two-flush on the board if they had hit their hand; I'm going for the steal."

I want to quantify these simple situations. I know I can get a lot of examples of checked flops where the last to act bets. I am only planning on modeling that situation for this paper.

Also, the independent variables are pretty limited: basically the player-specific data on the players in the hand, and the cards on the flop.

I know this is a very simplistic model; it is just a start, to see how a poker study would go. I hope to progressively add to the analysis as I go on.

biggiemcg
11-29-2004, 04:54 PM
This sounds interesting. I'm interested to know how you plan to collect data. The response of other players will be largely dependent on their impression of the player making the bet. You could incorporate this into your model by playing extensively under several different usernames and only beginning data collection once you believe that you have established a reputation. You could also have several of your buddies with different playing styles collect data. Either method has flaws.

Another thought would be that a long series of dummies for board value would probably work much better than any continuous measure of scare value. Maybe paired board, 2-flush, Axx rainbow, etc., etc. could all be dummy variables. You could also create dummies for the size of your bet (maybe five ranges relative to the size of the pot) and dummies describing the pre-flop action. Needless to say you will need to introduce many interaction terms to correctly capture the decision making process. A further implication is that you will need to gather a TON of data because some of the interactions could be fairly rare.
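The board-texture dummies suggested above are easy to generate mechanically. Here is one possible encoding (the particular set of dummies is just an illustration; you would pick whichever textures your theory says matter):

```python
def board_dummies(flop):
    """Map a flop to 0/1 texture dummies.

    `flop` is three (rank, suit) pairs: rank 2-14 (14 = ace),
    suit one of 'c', 'd', 'h', 's'.
    """
    ranks = [r for r, _ in flop]
    suits = [s for _, s in flop]
    suit_counts = {s: suits.count(s) for s in set(suits)}
    return {
        "paired": int(len(set(ranks)) < 3),          # board has a pair
        "two_flush": int(max(suit_counts.values()) == 2),
        "three_flush": int(max(suit_counts.values()) == 3),
        "ace_high": int(max(ranks) == 14),
        "rainbow": int(len(set(suits)) == 3),        # three distinct suits
    }

print(board_dummies([(14, 'h'), (7, 'd'), (2, 'c')]))  # ace-high rainbow
print(board_dummies([(9, 's'), (9, 'h'), (4, 'h')]))   # paired, two-flush
```

Each dummy then enters the regression as its own right-hand-side variable, which is what lets the fold probability respond to "paired board" and "two-flush" separately instead of through a single scare-value scale.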

One last point: whenever I've created econometric probability models, I've found it useful to specify the model as a linear probability model (OLS) first. The results are much easier to interpret intuitively, and they will only differ greatly from logit results if the range of predicted fold likelihoods is fairly spread out. That qualification will probably be a problem if you decide to include varied bet sizes on your part, so a logit model will probably be your best bet for the final analysis; it also avoids the need to worry about the LPM's heteroskedasticity problems. I'm interested to hear what you think.
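The LPM's weakness with spread-out predictions is easy to see numerically. Below, a linear probability model is fit by hand to a tiny made-up sample (bet size vs. fold-or-not); the fitted line gives sensible probabilities in the middle of the data but extrapolates past 1 for an extreme overbet, which is exactly where the logit's bounded predictions earn their keep:

```python
# One regressor: bet size in pot units; outcome: 1 if everyone folded.
# Invented numbers, purely to illustrate the LPM extrapolation problem.
xs = [0.25, 0.25, 0.5, 0.5, 0.75, 0.75, 1.0, 1.0, 1.5, 1.5]
ys = [0,    0,    0,   1,   0,    1,    1,   1,   1,   1]

# Linear probability model: OLS of the 0/1 outcome on bet size.
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def lpm(x):
    """Fitted fold 'probability' from the linear model."""
    return intercept + slope * x

print(lpm(0.75))  # mid-range bet: a sensible probability
print(lpm(3.0))   # huge overbet: the LPM runs past 1.0
```

With varied bet sizes in the data, predictions like `lpm(3.0)` stop being interpretable as probabilities, which is the point being made above.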

A_PLUS
11-30-2004, 02:21 AM
First off, thanks for the feedback, it is greatly appreciated.

[ QUOTE ]
The response of other players will be largely dependent on their impression of the player making the bet.

[/ QUOTE ]

I have thought about this a good deal; for my first run of the analysis, this is my thought. I chose low stakes to try to avoid the most perceptive players, and PokerTracker, as much as possible. Also, the analysis is not going to be based solely on my play; I do not even play that much. I will take each time my 'flagged' situation comes up and use it as a data point. I agree that table image is important, so I plan to use PT aggression prior to that hand, as well as tightness in the prior 10-20 hands at the table, as proxies for image. Once I get an operable model, I will do a live test with an unused username and use only my current play to approximate my table image. Thoughts?

[ QUOTE ]

Another thought would be that a long series of dummies for board value would probably work much better than any continuous measure of scare value. Maybe paired board, 2-flush, Axx rainbow, etc., etc. could all be dummy variables.


[/ QUOTE ]

Great idea; much more valuable than a continuous scale.


[ QUOTE ]

You could also create dummies for the size of your bet (maybe five ranges relative to the size of the pot) and dummies describing the pre-flop action.


[/ QUOTE ]

I was planning to have a scaled variable representing the size of the attempted bluff, i.e., 1 = under 50% of pot, 2 = half pot, 3 = pot, 4 = overbet. I was going to take pre-flop action into account, but originally only to a limited degree, i.e., limped, raised, re-raised.
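That coding scheme is straightforward to implement. The exact cutoffs between categories are one possible reading of the 1-4 scale above, not fixed by the discussion:

```python
def bet_size_bin(bet, pot):
    """Code a bluff bet into the four proposed categories:
    1 = under half pot, 2 = half pot up to (almost) pot,
    3 = roughly pot-sized, 4 = overbet.
    The boundary values here are illustrative assumptions."""
    ratio = bet / pot
    if ratio < 0.5:
        return 1
    if ratio < 0.95:
        return 2
    if ratio <= 1.05:
        return 3
    return 4

print(bet_size_bin(20, 100))   # small stab
print(bet_size_bin(50, 100))   # half-pot
print(bet_size_bin(100, 100))  # pot-sized
print(bet_size_bin(180, 100))  # overbet
```

One caveat worth noting: entering this as a single 1-4 scale assumes the effect on fold probability is linear in the code; the dummy-variable treatment suggested above (one indicator per range) avoids imposing that.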

[ QUOTE ]

Needless to say you will need to introduce many interaction terms to correctly capture the decision making process. A further implication is that you will need to gather a TON of data because some of the interactions could be fairly rare.


[/ QUOTE ]

I think for my initial analysis the interaction effects will be kept to a minimum. I will include variables such as previous bluff attempts, times shown a loser, etc., but for the time being I will be keeping it rather basic.

Point taken about OLS for the first run; my only worry was that I am basically modeling the percentage of the time that a bluff is successful, so OLS predictions will not be bounded to [0,1].

As for the data, I will be using PokerTracker to store as many hands as I can; if anyone has a large database, I would be obliged. I will bring the data into Excel from the DB and probably use RATS or MATLAB for the regressions. I think I should be able to get a few thousand situations where the last person to act bets the flop. I am not concerned yet with whether or not they were actually bluffing, just whether or not they were called.
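The filtering step, pulling out only the flagged spot (flop checked around to the last player to act, who bets), might look something like this. The hand-record layout here is entirely hypothetical and is not PokerTracker's actual schema:

```python
def flagged_situations(hands):
    """Select hands where the flop is checked to the last player to act,
    who then bets. Returns (hand_id, bet_won_pot) pairs, where the second
    element is the binary dependent variable for the regression."""
    out = []
    for h in hands:
        acts = h["flop_actions"]          # ordered (player, action) pairs
        # index of the first non-check action on the flop
        i = next((k for k, (_, a) in enumerate(acts) if a != "check"), None)
        if i is None or acts[i][1] != "bet":
            continue
        # the bettor is last to act iff everyone else checked before him
        if i == h["players_in_hand"] - 1:
            out.append((h["id"], h["bet_won_pot"]))
    return out

sample = [
    {"id": 1, "players_in_hand": 3, "bet_won_pot": True,
     "flop_actions": [("A", "check"), ("B", "check"), ("C", "bet")]},
    {"id": 2, "players_in_hand": 2, "bet_won_pot": False,
     "flop_actions": [("A", "check"), ("B", "bet"), ("A", "call")]},
    {"id": 3, "players_in_hand": 2, "bet_won_pot": False,
     "flop_actions": [("A", "bet"), ("B", "call")]},
]
print(flagged_situations(sample))  # hands 1 and 2 qualify; 3 does not
```

Hand 2 shows why the filter keys on the first aggressive action rather than the last recorded one: a called bet still counts as an observation, with the dependent variable set to 0.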


Thanks again