Evidence-based fitness is one of the great achievements of the last decades. When we started our fitness journey, it was all about what we read in the mainstream fitness media and what we heard from other people who were training. Nor were personal trainers used to reading scientific publications or reviews. Now, as a good online coach and/or personal trainer, you are expected to do exactly that. I love science; that’s why I did the PT course by Menno Henselmans. But this approach also has its limitations. In the fitness and health media landscape you can find conflicting evidence on almost every subject, and science often lags behind what experienced trainees and coaches may already know intuitively. In this article I’ll explain how science is conducted, how it is possible to find proof for potentially anything, and why science is still great.
One of the most elemental problems of science is the lack of funding, especially in fields like exercise and sports science, whose outcomes are not immediately life-saving. Science is usually done by universities, which are notoriously on a tight budget. And research is expensive! Salaries have to be paid, materials have to be bought, instruments have to be maintained, software has to be updated. And the deeper research goes, the higher (often exponentially so) the costs climb.
Lack of money is one of the main reasons why so much “sub-optimal” research is done that delivers no straight answers and only leads to more questions. “More research has to be done before we can conclude anything for the general population” is a phrase you will often hear in this context, and it is why people who have been training and/or coaching for a long time are often ahead of the scientific research.
A side note from Chantal, who has been working in life science for over 10 years:
What does it really mean not to have enough money for research? It means that you are not always using the optimal method or experimental condition, but the one you can afford. It means you lose the knowledge of a colleague working on a certain topic because the funding for his or her project runs out. You cannot publish in a journal with great reach because the publication costs are too high, or you cannot access the latest research because it is hidden behind a paywall. Or you cannot travel to a congress to exchange your knowledge with colleagues because your project budget is tight. And so on and so forth.
I really respect my colleagues from exercise and sports research, as they have even less funding than life science research, often invest their private money, and are truly passionate about what they are doing.
To get statistically significant results, you need a sufficiently large sample of test subjects. Bad research or pseudo-science takes small samples and tries to sell the differences it finds between groups as sound evidence. But those results could be pure chance! You can find this kind of “data” in mainstream fitness media, and it is often used in those shady Netflix documentaries that are so popular right now (yes, I’m looking at you, “The Game Changers”).
Good research tries to recruit as many test subjects as possible to minimize the chance of false positives. As you can imagine, big sample sizes lead to rising costs, making funding, once again, a problem. If a sufficient sample size (the good statisticians among our readers know how to calculate it!) cannot be reached, either this is clearly stated somewhere in the paper, or the work is not published as a full study at all and remains a pilot study. There are also so-called “case studies”, where particular cases of scientific relevance are presented to the scientific community. Even if you cannot extrapolate theories or guidelines for the general population from them, these cases are interesting and can be a starting point for further studies.
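To make the chance-finding problem concrete, here is a small simulation sketch (Python standard library only; the group sizes, number of trials and the 0.5-SD “medium effect” threshold are illustrative assumptions, not taken from any real study). Both groups are drawn from the same distribution, so the true difference is exactly zero, and we count how often a sizable difference shows up anyway:

```python
import random
import statistics

def spurious_effect_rate(n_per_group, trials=10_000, threshold=0.5, seed=42):
    """Draw two groups from the SAME normal distribution (true effect = 0)
    and count how often the observed difference in means exceeds `threshold`
    standard deviations -- i.e., a "medium effect" found purely by chance."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0, 1) for _ in range(n_per_group)]
        b = [rng.gauss(0, 1) for _ in range(n_per_group)]
        # Observed effect size: difference in means (population SD is 1).
        if abs(statistics.mean(a) - statistics.mean(b)) > threshold:
            hits += 1
    return hits / trials

print(f"n = 8 per group:   {spurious_effect_rate(8):.1%} spurious 'medium effects'")
print(f"n = 100 per group: {spurious_effect_rate(100):.1%} spurious 'medium effects'")
```

With only 8 subjects per group, roughly a third of these simulated “studies” show a medium-sized difference that does not actually exist; with 100 per group, it almost never happens. That is why tiny-sample findings should never be sold as sound evidence.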
Test Subject Relevance
Related to sample size is the problem that you need the right type of test subjects. If you want to find out how much training volume is ideal for pro bodybuilders, you need to study pro bodybuilders, not novices. This seems obvious, but it isn’t easy to accomplish. Imagine trying to find a statistically relevant number of said pro bodybuilders and trying to convince them to jump onto an entirely new training program that isn’t even optimized for them. Especially in the natural bodybuilding scene, those guys and girls often spend decades finding out what works best for them; there is no way they would risk anything, especially if a skinny guy in a lab coat tells them to do stuff they don’t like to do. By the way, most exercise and sports scientists are very serious about training and are competitive strength athletes themselves, so they aren’t really as skinny as some would expect.
Because of this, most exercise science is conducted on easy-to-find test subjects: in the best case sports students, but more likely average Joes who have never touched a barbell before. This likely explains the lack of scientific consensus, because almost every training protocol works with beginners. And even with average Joes, it is still not easy to reach a statistically relevant sample size, as adherence to the study protocol (say: training 5 times a week with a certain program, always to failure) is also an issue.
Because, as stated above, getting proper test subjects, getting enough of them, and getting enough funding to pay for everything is sometimes almost impossible, observational studies are quite popular. They observe a large population over a long time frame and then try to find differences. Most of the time this “observation” is simply done via questionnaires, sometimes filled out in retrospect (like “How much meat did you eat this week?”).
Those studies are close to useless for us! Trying to interpret anything from them is difficult at best and outright foolish most of the time:
- Correlation does not imply causation! This logical fallacy leads to numerous problems when trying to draw conclusions from correlations (read this). Just because we observe two variables that behave similarly does not mean they are linked. This is the main problem of observational studies, as all you can find in this kind of study are correlations between variables. Those correlations need to be investigated further by additional research before we can conclude anything. Sadly, this step is often skipped (again, money!), and mainstream media in particular likes to butcher the results of observational studies in favour of cheap headlines, giving birth to fitness myths like “red meat is evil!”.
- People are incredibly bad at recalling what they did in the past, especially when it comes to day-to-day habits like nutrition. Surveys are therefore always prone to a high amount of incorrect data. Sure, a huge sample size corrects for some of those mistakes, but it remains an important confounding factor.
- People lie. Most people have problems telling the truth to loved ones or even trusted coaches, can you imagine how inaccurate anonymous surveys are?
All in all, observational studies have big conceptual limitations and should only be used by researchers to find interesting interactions that need further investigation.
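A toy simulation can show how a hidden confounder produces a correlation between two variables that have no causal link at all. The variable names below are made up for illustration: a hidden “health consciousness” score drives both vegetable intake and gym visits, yet neither influences the other.

```python
import random
import statistics

def pearson(xs, ys):
    """Plain sample Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.stdev(xs) * statistics.stdev(ys) * (len(xs) - 1))

rng = random.Random(1)
n = 5000

# Hidden confounder: each person's overall "health consciousness".
health_consciousness = [rng.gauss(0, 1) for _ in range(n)]

# Two variables that are BOTH driven by the confounder,
# but do not influence each other in any way.
vegetable_intake = [h + rng.gauss(0, 1) for h in health_consciousness]
gym_visits = [h + rng.gauss(0, 1) for h in health_consciousness]

r = pearson(vegetable_intake, gym_visits)
print(f"correlation between vegetable intake and gym visits: r = {r:.2f}")
```

The simulation reliably shows a solid positive correlation (around r = 0.5 with these made-up numbers), even though eating vegetables causes exactly zero gym visits in this model. An observational study measuring only these two variables could easily mistake that correlation for a causal effect.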
The holy grail of scientific publications are the so-called meta-analyses and reviews, in which a large number of publications on a certain topic are reviewed to give an overview of the field. Even though these kinds of publications are highly valuable, some of them can fall into the trap of cherry picking (as they usually do not include ALL studies, but a subset that is considered appropriate). Cherry picking refers to the practice of selecting the studies that show what you personally want to show. Science isn’t easy, so mistakes happen and flawed studies get published despite peer review (because reviewers are also just humans). When looking for a scientific answer, you must look at the entire research field to come to a meaningful conclusion, not at single studies.
Cherry picking can also be a nefarious practice that exploits the fact that it is difficult to keep an entire research field in perspective, and it is often used by gurus, influencers, lobbyists and their ilk to sell products to the masses. It allows them to misuse the term “scientifically proven” to make a profit. Usually you can see through such schemes by asking yourself: “Is this too good to be true?”
Science is still great!
There is no doubt that scientific research is a very important and valuable tool. It often confirms what the most experienced among us have already observed through decades of coaching or training. But all the obstacles mentioned above (and there are even more) make it obvious that we can’t ignore what we see “in the trenches” when working with clients. People are different and respond differently, and we need to account for that. Dogmatic views in the name of science ignore the limitations of research and should be approached with caution, as everything dogmatic should be.
Given all that, science is still great! We need science to progress in our methods and to gain a better understanding of basic mechanisms. Science helps us eliminate outright lies and see through scams. Science keeps us moving forward, but it isn’t everything in a coaching context. Because there is no science about YOU personally!