1. Textbook - I used Dean and Voss, Design and Analysis of Experiments. It is a very comprehensive text, and for this purpose it was overkill. It covers methods in enough depth that it exposed the weaknesses of the software packages: the SAS code they give for the examples is quite sophisticated, and Minitab, as far as we could tell, was completely outclassed by the end of the course. I was doing everything in R, though I believe my students decided that was because I was a better programmer than they were. For statistics majors this is probably the right book, and I picked it because it gave perfect lead-ins to some topics I wanted to learn. But it was too ambitious, and I was not even trying to cover the whole book; I was using a slightly modified version of one of their sample curricula. The next time I teach this I would use Box and Hunter, which has the advantage of an associated R package for the examples. One really good aspect of Dean and Voss is the datasets: they made a point of using real data in both the examples and the homework sets, so we were forced to deal with messy data.
2. Material - I was actually learning some of the material myself along the way, since I had never taken a true DoE course (only one focused on applications in simulation), so some of the lectures took me a long time to prepare. On the other hand, I found I was developing a deeper understanding of statistical methods. In particular, I now ask whether multiple comparisons are an issue whenever I use statistical inference. When Tom Siegfried wrote his Science News editorial "Odds Are, It's Wrong: Science fails to face the shortcomings of statistics," I was able to identify exactly what the misuse of statistics was for each of his points (except the Bayesian ones, because I have not had much exposure to Bayesian methods). So I learned a lot about the underlying assumptions of many statistical methods. And one of my students is actually a statistics Ph.D. student, so someone was around to keep me honest.
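To make the multiple comparisons point concrete, here is a minimal R sketch (made-up data, not from my course notes): with 10 groups there are 45 pairwise tests, so even pure noise will produce "significant" differences unless the p-values are adjusted.

```r
# Minimal sketch: unadjusted pairwise tests on pure noise vs. Bonferroni-adjusted ones.
set.seed(1)
y <- rnorm(100)                      # pure noise, no real group differences
g <- factor(rep(1:10, each = 10))    # 10 groups of 10 -> choose(10, 2) = 45 comparisons

raw <- pairwise.t.test(y, g, p.adjust.method = "none")$p.value
adj <- pairwise.t.test(y, g, p.adjust.method = "bonferroni")$p.value

sum(raw < 0.05, na.rm = TRUE)   # typically a few false "discoveries"
sum(adj < 0.05, na.rm = TRUE)   # usually none after adjustment
```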
3. Software - The book uses SAS; I used R. My own graduate students started in Minitab and switched to R as the material in the book reached the limits of what Minitab could be made to do. The MBA student stuck with Minitab, and we had to live with its limits. I ended up getting much better at R and Sweave because I forced myself to use them for everything, even when no package existed for a method, and even when redoing the Dean and Voss examples. I am now a big fan of list comprehensions, and I think I can almost program in a functional style (as opposed to a procedural or object-oriented style), as in the sketch below. I won't say I like R better than Python, but I am almost as fluent in it now.
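As an illustration of what I mean by a functional style, here is a toy R sketch (the formulas are invented, not one of the Dean and Voss examples; npk is a built-in factorial data set): instead of accumulating results in a loop, I map a fitting function over a list of formulas.

```r
# Toy sketch of the functional style: fit several models by mapping over
# a list of formulas rather than writing an explicit loop.
formulas <- list(
  n_only    = yield ~ N,
  n_plus_p  = yield ~ N + P,
  full      = yield ~ N * P * K
)
fits <- lapply(formulas, lm, data = npk)   # npk: built-in 2^3 factorial experiment

# Extract one summary statistic per fit, again without a loop
sapply(fits, function(fit) summary(fit)$adj.r.squared)
```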
4. Summary - This course took a lot of time, partly because I was learning much of the material as I went along, and partly because I was learning the tools (R and Sweave) as well. It is gratifying to know that I actually taught something useful. The MBA student has started using the material at work (and can now go toe-to-toe with the Six Sigma Black Belts in his company when talking statistics). My Ph.D. student incorporated some of the material into his proposal, and he keeps saying he wishes he had taken this class years ago (he said the same thing after the last class I taught, which was basically created for him). And my statistics student seems to have learned a lot about working with real data (Dean and Voss have a lot of real, and messy, data sets). But in the future, I think I'm going to look for textbooks that also come with solved problems, because this was a lot of work for a single class.