Book Review: The Essential Guide to Effect Sizes

Before moving into psychology, I had never encountered the term “effect size”: a standardized, unitless way to report the effect of an intervention or treatment. This means you can say “woah, that’s a big effect” and everyone knows what you mean, whether or not they know the ins and outs of your particular research area. Being totally ignorant of this sort of thing, I did what I always do: bought a book!
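
To make the idea concrete, here is a minimal sketch (my own, not from the book) of one of the most common standardized effect sizes, Cohen’s d, which is just the difference in group means divided by a pooled standard deviation; the data are simulated purely for illustration.

```python
import numpy as np

def cohens_d(treatment, control):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    nt, nc = len(treatment), len(control)
    # Pooled variance, weighting each group's sample variance by its degrees of freedom
    pooled_var = ((nt - 1) * np.var(treatment, ddof=1) +
                  (nc - 1) * np.var(control, ddof=1)) / (nt + nc - 2)
    return (np.mean(treatment) - np.mean(control)) / np.sqrt(pooled_var)

rng = np.random.default_rng(42)
control = rng.normal(loc=100, scale=15, size=50)      # e.g. scores on some test
treatment = rng.normal(loc=107.5, scale=15, size=50)  # true effect of half a standard deviation
print(f"Cohen's d: {cohens_d(treatment, control):.2f}")
```

Because d is expressed in standard-deviation units, a value of around 0.5 reads as a “medium” effect under Cohen’s conventional benchmarks, whatever the original measurement scale was.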

Paul D. Ellis’s The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results is 150 pages of pragmatic and fluid advice on the use of effect sizes in the literature. Unlike other texts, Ellis is careful to keep everything as generally applicable as possible, which helps when you’re a statistician: too many explicit references to psychology or the social sciences can end up being more confusing when you don’t understand the context. That said, he does employ plenty of examples, but these come from wide-ranging sources and the context is always fully explained.

The book has three sections: effect sizes, statistical power and meta-analysis. Effect sizes and meta-analysis naturally go together, as meta-analysis combines multiple studies to get a better estimate of effect size and statistical significance. The inclusion of a section on power analysis is undeniably useful, but power analysis is generally concerned with making sure you can detect the effect you are looking for: that you are likely to get a statistically significant result if the effect is real and at least as large as you predict. You don’t need the observed effect to be in the form of an effect size to do a power analysis, but by including the topic Ellis provides a full description of good research practice for reporting and interpreting results. In this case the subtitle is more evocative of the book as a whole than the title. Effect sizes are mostly dealt with in the first chapter, the second being more concerned with interpreting an effect’s practical significance.
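
As a rough illustration of what a prospective power analysis looks like in practice (my example, not the book’s), the sketch below uses statsmodels to ask how many participants per group are needed for an 80% chance of detecting a medium effect of d = 0.5 at the usual 5% significance level; the numbers are purely illustrative.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Solve for the per-group sample size of a two-sample t-test, given the
# effect size we hope to detect, the significance level, and the desired power.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 64
```

Roughly 64 per group for a “medium” effect, which goes some way to explaining why small studies so often fail to find effects that are really there.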

The section on power analysis also provides a neat breakdown of statistical significance tests and error types. More importantly, it clearly and simply describes the fact that each error rate is conditional on whether or not there is actually an effect: a Type I error can only occur when there is no real effect, and a Type II error only when there is one. Forgetting this conditionality is at the root of almost all misconceptions about significance testing, so I applaud the statistical accuracy of this book. Ellis explains everything clearly and avoids the common missteps that happen when people try to explain statistics using words rather than equations, though I do find his references to “plain English guidelines” for statistical theory irritating. It is impossible to divorce statistics from maths, and the book has an unfortunate tendency to move the statistical details to footnotes; worse still, these footnotes are collected at the end of each chapter rather than on the page, which is tedious for the mathematically minded, who have to constantly flip back and forth.
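
A quick simulation makes the conditionality concrete (again my sketch, not Ellis’s): the Type I error rate is a probability computed over worlds where the null is true, and power over worlds where the effect is real. The effect size and sample size below are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d, alpha, sims = 30, 0.5, 0.05, 5000

def reject_rate(true_effect):
    """Fraction of simulated two-sample t-tests that reject the null,
    conditional on the true effect being `true_effect` standard deviations."""
    rejections = 0
    for _ in range(sims):
        control = rng.normal(0, 1, n)
        treatment = rng.normal(true_effect, 1, n)
        _, p = stats.ttest_ind(treatment, control)
        rejections += p < alpha
    return rejections / sims

print(f"P(reject | no effect)  = {reject_rate(0.0):.3f}  (Type I error rate, ~alpha)")
print(f"P(reject | d = {d})     = {reject_rate(d):.3f}  (power, i.e. 1 - Type II error rate)")
```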

The only other point against it is the price: over £25 for 173 pages! That total includes the beefy bibliography, which is why I excluded it from the earlier page count. I don’t know why academic texts are so expensive, but I wince every time I buy a new book. As usual, I would recommend going to your library, or asking your supervisor whether they could lend you something, in the first instance. I like and enjoy books anyway and I prefer having a reference I can access easily, but not everyone wants to prioritise academic books in that way.

The Essential Guide to Effect Sizes puts itself in direct opposition to researchers who simply report statistical significance and call it a day (a position which, I would like to point out, has to my knowledge never been advocated by actual statisticians). It is undeniable that this book provides an accessible guide for researchers who want to change this practice, and Ellis justifies his position well. He also avoids the temptation to over-simplify: he clearly thinks meta-analysis is useful, but he outlines its possible drawbacks. Despite being a statistician, I also found this book incredibly useful; meta-analysis and effect sizes were not included in my undergraduate curriculum. More importantly, it offers some insight into the world of psychology and social science research: what are their challenges and goals? In some ways, being a statistician is about inserting yourself into the research culture of whatever field you happen to be working in, and in my opinion reading texts aimed at researchers in that field can facilitate that.

Succinct, well-researched and clear, the book is an excellent resource for anyone trying to break away from reporting p-values devoid of context, or for the curious statistician approaching effect sizes for the first time.
