Analysing science media stories: our Top Tips
Posted 16/06/2020 1:08pm
How do we make sense of conflicting and contradictory studies? Scientific Digital Communications Editor Arran Frood reveals what to look out for when reading about research in the media.
From coronavirus predictions to the health effects of coffee, the media is full of stories brimming with science, analysis and advice. But reports of some studies often miss the mark – ever read a story telling you that red wine or chocolate was good for you, only to see the exact opposite headline a few days later?
The same goes for studies about the potential harms of Next Generation Products (NGPs) like vapes. One headline reports they’re “95% safer than smoking”, another claims vapers are damaging their lungs or blood vessels. How does anybody – including regulators, public health bodies and, perhaps most importantly, adult smokers and NGP users themselves – make sense of such a seemingly shifting landscape?
On this site we run a regular column in Our Thoughts called ‘Behind the Headlines’, which delves into the science in the studies that make up these seemingly contradictory headlines.
This article, though, is about sharing some of the things we look for when we’re critiquing a scientific study. We think this is incredibly important, because misconceptions or negative media coverage can undermine NGPs as viable alternatives to conventional cigarettes for adult smokers contemplating making the transition.
For example, in some US states buying tobacco cigarettes and cannabis is legal, but flavoured e-liquids are prohibited, and vaping shops have been shut down. If this is based on limited and incomplete science, or short-sighted reporting on that research, it impacts adult smokers’ choices and potentially affects their health for all the wrong reasons.
What we look for when we analyse studies
- All scientific studies have their limitations. Ask any scientist: you can rarely get perfect data for a study. This holds true for any piece of research, regardless of whether the results are seemingly ‘pro’ or ‘anti’ the subject being tested. Data isn’t always reliable; ‘fuzzy’ data can hide real trends while flagging false ones.
- Results always need to be put into context. The best way to compare the relative risks of vapes and conventional cigarettes would be to compare the health of: 1) vapers, 2) adult smokers who would otherwise continue to smoke, and 3) non-smokers. Unfortunately, vapes haven’t been around long enough to assess their long-term health impacts.
- “The dose makes the poison”. If a toxic chemical has been detected in vape aerosol, but at a concentration 1,000 times lower than in cigarette smoke, this needs to be made clear – simply detecting its presence is not helpful. Acrylamide, a cancer-causing compound, is detectable in bread, coffee, biscuits and many everyday meals, yet you’d have to consume hundreds of slices of bread per day to get anywhere near a toxic dose. These details matter in accurate reporting. The presence of minute quantities of diacetyl (banned as an e-liquid ingredient in the UK) in some e-liquid samples led to false stories about vaping products causing ‘popcorn lung’. This story continues to circulate in the media and beyond, leading to misperceptions about vape products (see this explainer from Cancer Research UK for an independent view on the topic).
- Consider the bigger picture. One stand-alone study is only part of a jigsaw, and doesn’t carry as much weight as a series of studies by different scientists who all report similar findings. We always prefer to review studies in the context of other research to help build an overall picture.
- Are realistic conditions used? It’s important to note whether test conditions are realistic and resemble product use by consumers. Consideration should be given to the amount of e-liquid used, the nicotine concentration and the pattern of exposure. If the conditions used are not realistic, the results may not be accurate. Garbage in, garbage out, as they say.
- This goes down to the real minutiae, like puff topography. The way a vaper puffs on their device can influence the release of aerosol. This ‘puff profile’ differs between people, and even in the same person depending on the time of day, their mood and so on. Puff topography can also vary between different devices and batteries. We therefore take a close look at whether the products tested by others are comparable to Imperial’s myblu device.
- Small sample sizes reduce the reliability of any conclusions that can be drawn. Having only a few recruits, spread too thinly over too many categories, can skew results – reducing the certainty and confidence in any conclusions. Small sample sizes are often used in food and diet studies, massively reducing their statistical power: coffee good, coffee bad; red wine good, red wine bad.
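The sample-size point above can be made concrete with a back-of-the-envelope sketch. Assuming a simple normal approximation (the 30% response figure and the function below are purely illustrative, not drawn from any real study), the uncertainty around a reported proportion shrinks only with the square root of the sample size – so a study ten times larger is only about three times more precise:

```python
import math

def ci_halfwidth(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% confidence interval half-width for a reported
    proportion p from a sample of n people (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical survey where 30% of participants report an effect:
for n in (10, 100, 1000):
    print(n, round(ci_halfwidth(0.3, n), 3))
# prints:
# 10 0.284
# 100 0.09
# 1000 0.028
```

With only 10 participants, the ‘true’ rate could plausibly sit anywhere from roughly 2% to 58%; with 1,000 participants the interval narrows to about ±3 percentage points – one reason a single small diet study can flip from “coffee good” to “coffee bad”.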
Biological, or pre-clinical, studies
As with many drug discovery studies, early scientific research into Next Generation nicotine delivery products often involves subjecting human cells in a laboratory setting to e-liquids or aerosols. This is often referred to as ‘in vitro’ research. These studies have some major plusses, but also limitations, which is why they need to be complemented by clinical studies:
- In vitro studies are useful for comparing toxicity under highly controlled conditions. This is valuable because many chemicals have already been tested in much the same way using internationally recognised, standardised tests, so the data can be compared to form part of a standardised risk assessment.
- Vapers are not small samples of cells. Cell-based studies have improved with advances in technology, such as 3D scaffolds rather than 2D petri dishes, but they still have limitations. Humans have complex metabolisms, which can be hard to replicate in cell-based research.
- Researchers often use high concentrations of e-liquid or vapour to ensure an effect is observed. If the study has used exposures 100 times higher than humans would experience, it’s hard to draw reliable conclusions. As with food and drug in vitro research, cell death or proliferation doesn’t mean the exact same effect will be observed when scaled up to a ‘whole’ person.
Human, or clinical, studies
Vaping products haven’t been around long enough to generate the type of long-term epidemiological data that typically reveal health trends after decades of use. However, a growing body of clinical studies on humans, supported by case studies and survey-based data, is beginning to emerge. We look for these kinds of elements when analysing any research or stories that hit the headlines:
- Previous smoking history not being taken into consideration. Results may be affected by previous smoking history (and other good and/or bad lifestyle choices). After all, the risk of cancer and heart disease accumulated from smoking does not just disappear at the time someone switches to vaping or quits completely.
- Some people may be predisposed to disease. An individual’s genetic risk of disease is important to acknowledge, as one person may be more likely to develop a disease than another – regardless of their lifestyle choices. Unfortunately, this is hard to determine and remains an unknown in nearly all studies that don’t expressly acknowledge or search for it.
- Product use may cause transient effects. Nicotine, for example, is a stimulant, so it’s likely to cause some reactions like increased heart rate. These effects are sometimes highlighted as indicators for disease. However, these reactions are often short-lived and not dissimilar to those experienced when you, for instance, drink coffee or watch a horror film.
Human behavioural studies: surveys
We’re at the early stages of collecting data about our NGPs, so a lot of initial data comes from simple surveys asking people how they do and don’t use them.
Surveys have various positives, such as that they are (relatively) quick and simple to carry out, inexpensive, and a good way to get general pointers for more in-depth research. The Global Drug Survey is a good example of a widespread effort that can harness enough data for scientific papers to be published on its findings, like the rise in ‘smart drug’ use in academia. But there are various pitfalls to avoid, particularly when considering the power and weight of the data:
- Study participants can misreport information. Survey data often relies on people self-reporting product use and lifestyle. Are people always honest? No – but that doesn’t mean they are lying: do you remember precisely what you ate last week? People are prone to inaccurate recall, or to providing what they think the answer to a question ‘should’ be. They may also fail to report something they’re embarrassed to admit; this is why diet studies can be so unreliable – who wants to admit to eating pizza for breakfast?
- Survey fatigue and reliability. People can get bored during a survey, especially a longer one, which can result in inaccurate reporting. Questionnaires are often kept simple to hold people’s interest, which in turn limits the detail available for a reliable analysis.
- ‘Experimentation’ is not ‘regular use’. Often in surveys, when people respond that they’ve ‘experimented’ or ‘used a product once’, it can be recorded as ‘regular use’, or in a way that does not reflect its transience. This can lead to inflated statistics for product use, particularly by youth. For NGP-related studies, it’s also crucial that previous smoking history is fully documented when assessing so-called ‘gateway’ issues.
- Biological verification of product use. Stronger studies validate product use by physically measuring specific substances in the body (called biomarkers) found in saliva, blood or urine. We look for this as a good indication of product use; it’s a lot more reliable than self-reporting.
- Surveys are only representative of the time they are taken. Surveys often collect data at a single point in time. An individual moving from smoking to an NGP like a vape undergoes a dynamic, unique journey. This isn’t evident from snapshot surveys.
- Causation claims cannot be made from case studies. A case study is a single report of disease in an individual and isn’t strong evidence on its own. By definition, a single data point cannot support conclusions about causality – one individual is not representative of a population. Medical journals like to publish individual case studies on what happened to a certain patient, but they are carefully framed as stories, not as wide-scale experiments to be replicated.
Don’t jump to conclusions
Science is a process. There is often no definitive answer at any one time, and research often moves in a series of incremental steps, working consecutively – and concurrently – with other studies. It can take many years for a consensus to emerge. There is rarely anything like the true ‘breakthrough’ you might see heralded in the mainstream press – rather, any such advance rests on a huge amount of previous work over many years.
For instance, it looked as if the malaria drug hydroxychloroquine might have some efficacy in treating COVID-19, until further studies revealed it either had no effect or could even be harmful. In another example, it took decades for scientists to widely agree that climate change is caused by human activity (and a very small minority still disagree). The effectiveness of many drugs, and whether certain foods like coffee or chocolate are beneficial or harmful, is also repeatedly questioned.
It will likely take a great deal more research before we have a clearer picture, and wide consensus, about the harm reduction potential of our NGP portfolio. Until then, our commitment remains firm and our research continues apace. We’ll also continue to address gaps in the scientific literature while also calling out misleading science in the Behind the Headlines series whenever we believe it appears.