Healthy User Bias 101: Why Vaccinated vs Unvaccinated Statistics May Be Actively Misleading
If a study or statistics are cited to "prove" a potentially risky treatment lowers one's risk of death or disability, be skeptical. Be very skeptical.
Comparisons of death rates between populations that receive or do not receive a treatment are often cited to “prove” that medical treatments such as COVID vaccines save lives. However, comparisons between vaccinated and unvaccinated populations that are not controlled for prior baseline health status can be very misleading.
Typically, vaccines are not administered to people who have very serious life-threatening conditions, a statistical phenomenon named “healthy vaccinee bias” or, in a more general sense, “healthy user bias”.
Consider the following hypothetical scenario regarding the death rates of populations who did or did not get a treatment, such as one that is deemed risky to administer to people who are in very feeble health:
Group A, received no treatment: 100,000 person-years, 110 deaths
Group B, received the treatment: 100,000 person-years, 90 deaths
Looks like Group B did about 18% better, right? Not so fast…
Say Group A included 10,000 individuals with very serious prior health conditions, according to certain defined criteria, 100 of whom died during the analyzed period, but only 1,000 of Group B were in comparably unhealthy condition, 60 of whom died during the analyzed period. So then you have the following breakdown:
Among people with very serious prior conditions:
Group A: 1,000 deaths per 100K person-years (10,000 people, 100 deaths)
Group B: 6,000 deaths per 100K person-years (1,000 people, 60 deaths) = 6 × Group A's rate
Among healthier people without such serious prior conditions:
Group A: 11.11 deaths per 100K person-years (90,000 people, 10 deaths)
Group B: 30.30 deaths per 100K person-years (99,000 people, 30 deaths) = 2.73 × Group A's rate
So, while the whole-population statistics indicate better outcomes for Group B, a closer look shows that within each cohort, seriously ill or healthier, outcomes for Group A were several times better.
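To make this arithmetic easy to reproduce, here is a minimal Python sketch using only the hypothetical numbers from the scenario above. It computes the crude (whole-population) death rates and the rates within each prior-health stratum; the group labels and data structure are purely illustrative.

```python
# Hypothetical scenario above: (person-years, deaths) per prior-health stratum
groups = {
    "A (no treatment)": {"serious prior conditions": (10_000, 100),
                         "healthier":                (90_000, 10)},
    "B (treated)":      {"serious prior conditions": (1_000, 60),
                         "healthier":                (99_000, 30)},
}

def rate_per_100k(person_years, deaths):
    """Deaths per 100,000 person-years."""
    return deaths / person_years * 100_000

for name, strata in groups.items():
    total_py = sum(py for py, _ in strata.values())
    total_deaths = sum(d for _, d in strata.values())
    print(f"Group {name}: crude rate {rate_per_100k(total_py, total_deaths):.2f} per 100K")
    for label, (py, deaths) in strata.items():
        print(f"  {label}: {rate_per_100k(py, deaths):.2f} per 100K")
```

Running it prints the crude rates of 110 vs 90 per 100K alongside the stratum rates of 1,000 vs 6,000 and 11.11 vs 30.30: Group B looks better overall while doing worse within every stratum, a textbook instance of Simpson's paradox.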
That is why it is so important, when assessing a treatment's safety and efficacy, to compare cohorts with comparable characteristics prior to administration.
In data analysis, a typical indicator of healthy user bias among recipients of a treatment that can cause adverse reactions but offers no immediate protection (vaccine efficacy is typically not credited until about two weeks after administration) is a below-normal death rate among the recipients very soon after administration, especially if the death rate then climbs above normal levels over time.
Sometimes, though, the risk a treatment poses is so great that it statistically overwhelms healthy user bias, and an immediate spike in adverse events or even deaths is evident after administration. In such cases, that is a very serious safety signal.
One such example is the “clustering” of SIDS deaths in babies soon after vaccination appointments.
But if a study shows a lower rate of adverse events immediately following a potentially hazardous treatment, that is a sure-fire “tell” for findings tainted by healthy user bias, and a red flag for “designer research” specifically crafted to produce a predetermined result. Even more so for death rates, because serious adverse reactions can lead to medical care or incapacitation that may actually reduce one's probability of dying while under intensive care.
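As a rough illustration of the “tell” described above, here is a minimal Python sketch that checks a series of post-administration death rates for the dip-then-rise signature. The weekly rates and the baseline value are hypothetical placeholders, not real data, and the two-week cut-off simply mirrors the efficacy window mentioned earlier.

```python
# Hypothetical weekly death rates (per 100K person-years) by week since administration,
# compared against an assumed baseline for a comparable population. Illustration only.
baseline_rate = 100.0
weekly_rates = [40.0, 55.0, 70.0, 90.0, 105.0, 115.0, 120.0]  # weeks 1..7

# The "tell": below-normal mortality right after dosing (weeks 1-2), above-normal later
early_deficit = all(rate < baseline_rate for rate in weekly_rates[:2])
late_excess = any(rate > baseline_rate for rate in weekly_rates[4:])

if early_deficit and late_excess:
    print("Pattern consistent with healthy user bias: below-normal death rate "
          "immediately after administration, climbing above normal over time.")
```

In a real analysis, the comparison would be against an age- and health-matched baseline rather than a single fixed number, but the shape of the pattern is the same.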
So, when analyzing such data, pay close attention to the controls, especially initial conditions. But also bear in mind that the results of controlled studies can be “baked” with exclusions or misleading data reporting, such as the exclusion of people who left the COVID shot clinical trials due to adverse reactions, or counting the death of an individual who went “off protocol” and got a Moderna shot as a death in the “placebo group” of the Pfizer clinical trial.
https://ijvtpr.com/index.php/IJVTPR/article/view/86/224
Postscript: The purpose of this article is to illustrate this very common phenomenon in as clear and striking a manner as possible. I welcome any suggestions and input, especially if you spot any errors. I've been tripped up by the old “from/form” typo many a time, and a few typos crept naturally into this very paragraph as I wrote it.
With much gratitude to the International Journal of Vaccine Theory, Practice, and Research for ethical scientific research and the DailyClout Pfizer/BioNTech Documents Investigations Team for their extensive work.
Thank you for the post.
It doesn't help that vaccinated individuals were being counted as unvaccinated when they died or got sick.
Another potential approach.
If the treatment of a large group results in an overall improvement, or a tragic loss of life, then it should be possible to identify that effect through a comparison with relevant historical data for the group.
In your example, if the two groups of vaccinated and unvaccinated were joined back together, eliminating the collider bias, and compared with the group's overall performance beforehand, then it would be possible to see the downward trend in health.
A recent example of this is the increase in all-cause mortality that coincided with the rollouts.
Raw death numbers are hard to sweep under the rug.
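One simple way to operationalize the comparison suggested in this comment is to pool the whole group back together and compute excess all-cause deaths against a historical baseline. The numbers below are made-up placeholders; a real analysis would also adjust for population size and age structure.

```python
# Hypothetical sketch: observed all-cause deaths for the recombined group versus
# an expected baseline taken as the mean of prior years. Placeholder numbers only.
historical_annual_deaths = [9_800, 10_050, 9_900, 10_150, 10_100]  # prior five years
observed_deaths = 11_300                                           # year under analysis

expected = sum(historical_annual_deaths) / len(historical_annual_deaths)
excess = observed_deaths - expected

print(f"Expected deaths (historical mean): {expected:.0f}")
print(f"Observed deaths: {observed_deaths}")
print(f"Excess: {excess:.0f} ({100 * excess / expected:.1f}% above baseline)")
```

With these placeholder figures the output would show roughly 1,300 excess deaths, about 13% above the historical baseline, which is the kind of signal the comment is pointing at.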