> If you want to just see someone play the game out, there's plenty of clips for that. I saw this problem worked out in this bbc documentary a few years back, for example.

Thanks for the clips, I can't normally watch videos, but we're kinda slow today. They were interesting.
Edit* I found the episode.
http://www.watch-tvseries.net/series...of-Mythfortune
The first half is tested first, the second half is tested at around 13 minutes in.
> You can run a simulator for a few thousand iterations and it'll get to 33/66. It is the nature of statistics that if you have a relatively small sample size it won't necessarily be exactly what you'd expect. After all, if you ran the test once you'd have a 100% chance either way, based off that one data point. Statistics has a whole terminology and methodology for this.

Exactly. This is why I ran 10,000 iterations in my simulation. You have to have a statistically significant sample to get proper results. I'm sure that is a larger sample than necessary, but it also runs nearly instantly on my computer, so 0 fucks given on oversampling.
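To make the sample-size point concrete, here's a sketch in Python (the original simulator's language isn't stated, so this is my own rendering). Always-switching wins exactly when the first random pick was wrong, so you can watch the measured rate wander around 2/3 at different iteration counts:

```python
import random

def monty_hall_switch_wins(trials, rng):
    """Fraction of games won by always switching."""
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # car behind a random door
        pick = rng.randrange(3)  # contestant's random first pick
        # the host removes a goat door, so switching wins
        # exactly when the first pick was wrong
        if pick != car:
            wins += 1
    return wins / trials

rng = random.Random(42)  # seeded so runs are repeatable
for trials in (10, 100, 1_000, 10_000):
    print(trials, monty_hall_switch_wins(trials, rng))
```

Small runs bounce around a lot; by 10,000 iterations the number sits close to 0.667, which is the convergence being described.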
> Enough of this remedial shit. What I really want to know is: should you switch your case in Deal or No Deal at the end?

At the very end? Switch, if you are approaching it as a larger Monty Hall problem. The correct answer to a multi-stage Monty Hall problem is to stay until the last choice, then switch. But that assumes several things, up to and including that you didn't already open the million-dollar case (which is not possible in the Monty Hall problem; you can't accidentally get rid of the top prize and keep playing). Also, the bank periodically makes offers to buy you out... but, and this is key, the bank knows which cases are unopened (obviously, so do you) but it does not know what is in a given case. It has no more idea where the million dollars is than you do. So the "host knows where the car is" part of Monty Hall does not apply to the bank's offers; it is going solely off the remaining total potential value in the cases. Which makes it like the "Ignorant Monty Hall problem", where Monty can reveal the car instead of a goat and screw you.
> Yeah their show is unwatchable for me. The last one I saw was the breaking bad episode and they covered so little content over the duration of the show I had to fast forward it. And there's no real scientific discussion, it's just lowest common denominator gruel.

I love this show. They don't always show the science, because sometimes the question being asked doesn't really implicate anything "scien-cy". And every once in a while, there is just a myth they test that is absolutely mind-blowing.
> Hoss, do you have a degree? What level and what field? And more importantly, did you ever have to take a statistics class?

BS in Electrical Engineering, and not that I recall.
> BS in Electrical Engineering

Twins! I definitely took a stat class, though. With a decade past, it's a little hazy.
> How many iterations does it take to get recognizable results? Does that mean those 2 tests are completely worthless?

Good question. I would have to dig deeper to do the math for significant sample sizes. I just ran so many iterations that it would unquestionably be "enough". Were I sitting at my work computer, I could run more or fewer iterations and see how the outcomes were affected. The two live tests don't have enough samples to arrive at the theoretical values, but they are useful in showing the situation favors switching.
> Did your simulator pick randomly? If not, it was less than worthless.

The simulated procedure is: put the car behind a random door and (separately) choose a random door. If Switch? is on, a goat door is revealed, then the unrevealed door is selected. Otherwise, the original choice is held.
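For reference, a literal Python rendering of that procedure (a sketch only; the actual simulator's code isn't shown here, and names like `play` are mine):

```python
import random

def play(switch, rng):
    """One game of the procedure described above."""
    doors = [0, 1, 2]
    car = rng.choice(doors)    # car behind a random door
    pick = rng.choice(doors)   # separately, a random first choice
    # reveal a goat door that is neither the pick nor the car
    goat = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        # select the one door that is neither picked nor revealed
        pick = next(d for d in doors if d != pick and d != goat)
    return pick == car

rng = random.Random(1)  # seeded for repeatable runs
n = 10_000
stay_rate = sum(play(False, rng) for _ in range(n)) / n
switch_rate = sum(play(True, rng) for _ in range(n)) / n
print(f"stay: {stay_rate:.3f}  switch: {switch_rate:.3f}")
```

Over 10,000 games the two rates land near 1/3 and 2/3, matching the theoretical answer.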
> The focus on empirical data is really odd. It's not like there are variables that are either uncontrollable or unaccountable. It's basic math. You can use a simulation or do the problem for yourself, but to delineate out theory and practice on a simple stats problem is asinine. What you're essentially saying is you think the foundations of math are wrong on a fundamental level, and once you're arguing that then I don't know what anyone can say.

Yup. It's fine to exercise with more complexity than is needed for fun, but one of the foundations of problem solving is to reduce the problem to its most basic components. Worrying about sample sizes for this problem is not doing that.
> The focus on empirical data is really odd. It's not like there are variables that are either uncontrollable or unaccountable. It's basic math. You can use a simulation or do the problem for yourself, but to delineate out theory and practice on a simple stats problem is asinine. What you're essentially saying is you think the foundations of math are wrong on a fundamental level, and once you're arguing that then I don't know what anyone can say.

No, I'm saying that if the theoretical results don't pan out in the real world, then the whizzards of smart must have missed something. They are human, after all. Any theory that can't pass a practical test is a bad theory. You'd have to be a simpleton to trust what's written out on paper over actual results.
> No, I'm saying that if the theoretical results don't pan out in the real world, then the whizzards of smart must have missed something. They are human, after all. Any theory that can't pass a practical test is a bad theory. You'd have to be a simpleton to trust what's written out on paper over actual results.

The theory does match the empirical results. You're just too stubborn to see it. Your problem here is that you think 20, or 100, is a definitive sample size, so you conclude that the theory doesn't match the empirical results when, in fact, it does. A coin flip has a 50/50 chance of being heads, but that is not guaranteed over 1 flip, 10 flips, or even 100 flips. If you flip the coin 100 times and get 53 heads and 47 tails, that doesn't mean the theory doesn't match empirical results. Given enough samples, any probability test will return to its true averages. That's why people run simulations of a million tests, to smooth out the variations. No one is going to sit around and play the Monty Hall game a million times.
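The 53/47 example can be made precise: the standard deviation of the heads count over 100 fair flips is sqrt(100 × 0.5 × 0.5) = 5, so a 3-head excess is well under one standard deviation. A quick check (Python assumed; the numbers here are from the binomial formula, not anyone's actual flips):

```python
import math
import random

n, p = 100, 0.5
sd = math.sqrt(n * p * (1 - p))  # standard deviation of the heads count: 5.0
z = (53 - n * p) / sd            # 53 heads is only 0.6 sd above the mean

# one seeded run of 100 flips, just to see typical scatter
rng = random.Random(3)
heads = sum(rng.random() < p for _ in range(n))
print(f"sd = {sd}, z for 53 heads = {z}, this run gave {heads} heads")
```

A 0.6-sigma deviation is entirely unremarkable, which is the point: theory predicts this much scatter at small sample sizes.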