Academically Adrift was published last month. The link takes you to the U of Chicago Press but you can Google "academically adrift" and find many articles about it. (Disclaimer: I have not read the book itself and I do not get a commission if you buy it.)
A key component seems to be the authors' finding that roughly 36% of all college students fail to show improvement on a particular learning assessment, the CLA. The street-level interpretation? College doesn't work.
As expected, some in the higher education world are upset by these findings and the study has been attacked on several fronts. However, it seems to me that a key statistical understanding is missing.
For the sake of discussion, let's take the findings at face value. Let's assume that the CLA is a good test and let's assume that the data is representative of all college students. Is the street-level interpretation fair? Maybe, maybe not.
We don't know what would have happened to a similar group of non-students. What would the CLA show for a group who entered the workforce right after high school and never attended college? If that group showed the same 36%–64% split, then college would appear to make no difference in learning.
But what if only 10% of the non-college group improved? Then the college students' results would be evidence that college works well. What if it went the other way and 90% of non-college students improved? That would be evidence that college actually hurts.
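The comparison can be sketched in a few lines of code. The non-college rates below are the hypothetical figures from the discussion above, not real data; only the 64% college improvement rate comes from the study's reported 36%:

```python
# Fraction of college students whose CLA scores improved,
# per the study's reported 36% showing no improvement.
college_improved = 0.64

# Hypothetical improvement rates for a non-college comparison
# group (illustrative only -- no such data exists in the study).
scenarios = {
    "no difference": 0.64,   # same split as the college group
    "college helps": 0.10,   # only 10% of non-college students improve
    "college hurts": 0.90,   # 90% of non-college students improve
}

for label, noncollege_improved in scenarios.items():
    effect = college_improved - noncollege_improved
    print(f"{label}: college minus non-college improvement = {effect:+.2f}")
```

The point is that the same 36% figure supports three very different conclusions depending on a comparison number the study doesn't have.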
Even without considering an alternate test group, it's worth pointing out the "half empty/half full" issue. If 36% of college students show no improvement, then 64% evidently do show some improvement. If college increases scores for nearly two-thirds of students, it might not be fair to say college doesn't work.
What do you think of this study? How can the data be interpreted?
Also, is it fair to comment on the study based on this blog post or should you read more about Academically Adrift first? Should you read the entire book before you comment? The world is full of summaries about summaries. How far back toward the original data should we have to dig before we're allowed to draw conclusions?