Researchers are trained to look for the original methods whenever they read a new study, especially if the results are surprising. Learning how the study was done provides information that helps determine whether the science is sound and what to make of it.
The chocolate milk survey is described as a nationally representative survey of 1,000 American adults, but this is impossible to verify without seeing how respondents were selected. Likewise, how the survey was conducted – whether it was a phone or online survey, for instance – can have a significant impact on its accuracy. Research suggests that phone surveys may be less accurate than online surveys because they require people to give their responses out loud to another person instead of quietly clicking away in privacy.
For instance, someone who holds racist views may feel comfortable checking a box about it but might avoid openly professing those opinions on the phone to a stranger. It’s unlikely the chocolate milk survey ran into such problems, but depending on the questions asked, other challenges may have presented themselves.
Likewise, it’s difficult to interpret the results of the chocolate milk question without seeing how it was worded. Poorly phrased or confusing questions abound in survey research and make findings hard to interpret.
An NPR interview with Jean Ragalie-Carr, president of the National Dairy Council, is the closest we can get to the actual wording of potential responses: “there was brown cows, or black-and-white cows, or they didn’t know.” But as Glendora Meikle of the Columbia Journalism Review points out, we don’t know if those were the only options presented to respondents.