Show me your evidence! Testing of drugs.
#34439 | 04/02/08 06:18 AM
OP | Master Elite Member
Joined: Jul 2007 | Posts: 1,597 | London, UK
I read this and found it pretty interesting… long, but it made me chuckle at the end. It's all about clinical drug trials.
quote
Show me your evidence!
Oh, how I love that question. It normally comes from pious know-it-alls who have absolutely no intention of letting evidence cloud their bias but who want to think they are superior because they are "scientific" or practice "scientific medicine".
I wonder whether they have thought much about the "scientifically proven" drug treatments for depression called antidepressants, which were "proven" (by statistical analysis of clinical trials) to be little better than placebo last week.
How could it be that, after all the rigorous scientific double-blinded, placebo-controlled trials demanded by regulators, these drugs are later found to be useless?
Well, there are quite a few very good reasons but they all reach the same conclusion: Scientific medical experiments will never produce advances in medicine and they will never be able to decide the best treatment for your problem.
So if you're ready to expose the false god of science in medicine, grab yourself a comfy chair and a cocoa (made with rice-milk, of course) and let me tear down this sacred cow...
Most of us are familiar with a high-school approach to experimentation. We arrive at a hypothesis and then test the hypothesis. If we want to know what happens to a certain rock when we hit it with a hammer, we could take two identical rocks, hit one with a hammer, and observe the difference between the one we hit and the one we didn't.
We can be fairly certain that the blow of the hammer caused the destruction of the rock because we still have the control rock to compare it with.
Such comparison is not possible in a medical experiment. Biological diversity means that no two people are the same. Physically, chemically and emotionally, we are unique and any intervention makes us different to who we were before. It is therefore impossible to isolate the effects of the experiment.
Being good scientists, we may try to repeat the rock experiment 3, 4, 10 or 20 times to increase "certainty" about what happens when we strike a rock with a hammer. Unfortunately, by repeating the experiment, we have changed the question.
What we wanted to know was what happened to "a rock" when we hit it, now we are looking at what happens to "rocks" when we hit them with a hammer. The difference is, now we are more concerned with the hammer than the rock.
Imagine for a minute that you are the rock. Do you really want to know what happens to "rocks" when they are hit with a hammer, or do you want to know about you? If the medical procedure is the hammer, what you want to know is what is going to happen to you when you have the procedure.
Unless you really want to sign up to a massive experiment, your outcome is the only one that is valuable to you. Scientists don't look at it that way. Scientists don't care about what happens to you, or any individual, they want the "power" (their word) that comes from observing repeated results.
However, since every one of their subjects is so different, they have a tough time "controlling" the experiment, something they must do to remove the enemies of science: confounding, bias and chance.
In the language of the medical experiment "controls" are the group that is left unaltered. Since you can't be your own control, the only option for a scientist is to take a group of subjects and "randomise" them into two groups, one to be the control, the other the experimental group. Once the experiment is performed, they compare the two groups and presume that the average of the difference in the outcome was due to the intervention.
In scientific terms, the lower the number of subjects in each group, the lower the "power" of the study. In this context "power" refers to the certainty that the difference in outcome was, in fact, due to the intervention, rather than chance. It has nothing to do with the effectiveness of the treatment. However, the relevance of the results to any individual in either group decreases with the number of subjects in the group.
In a clinical trial, patients are assigned into one of two (or three or four) groups. The split is supposed to be totally random so any characteristics of patients that might favour the treatment or the placebo are split randomly between the groups. This is to avoid the possibility that an experimenter might unconsciously place the sicker, shorter or better looking patients in one group to favour a certain outcome.
Each group receives either the treatment or a placebo (which can be another treatment) and then every patient is measured at the start and the end of the experiment. The difference between the two measurements, averaged across all the patients in each group, is presumed to be the effect of the treatment.
Let's imagine a clinical trial to compare "Treatment A" with "Placebo B." After the experiment, we find that Treatment A has resulted in substantial improvement for 70% of patients and Placebo B has resulted in a substantial improvement for 40% of patients. Obviously, doctors would want to use treatment A because, on average, it was more effective. However, these results tell us absolutely nothing about what happened to any individual patient in this trial.
Even armed with the results of this trial, when you go for treatment, your doctor has no way of knowing whether you will be one of the 70% for whom the drug worked or the 30% for whom it didn't. They have no way of knowing whether you might be one of the 40% who improved with the placebo.
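To put the quoted 70%-vs-40% example in concrete numbers, here is a short Python sketch. The figures are the quote's hypothetical ones, not real trial data, and the function names are my own; it computes the absolute difference between the groups and the "number needed to treat" that follows from it:

```python
# Hypothetical figures from the quote: Treatment A helps 70% of its group,
# Placebo B helps 40%. Group-level rates say nothing about a given
# individual, but we can express what they do say on average.

def absolute_risk_difference(p_treatment: float, p_placebo: float) -> float:
    """Difference in improvement rates between the two groups."""
    return p_treatment - p_placebo

def number_needed_to_treat(p_treatment: float, p_placebo: float) -> float:
    """How many patients must get the treatment (rather than placebo)
    for one extra patient to improve, on average."""
    return 1 / absolute_risk_difference(p_treatment, p_placebo)

arr = absolute_risk_difference(0.70, 0.40)   # 0.30
nnt = number_needed_to_treat(0.70, 0.40)     # about 3.3

print(f"Extra improvement attributable to Treatment A: {arr:.0%}")
print(f"Patients treated per extra improvement: about {nnt:.1f}")
```

The point the quote is making survives the arithmetic: a 30-point average advantage still tells any single patient nothing about which group they would have landed in.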
Remember that theoretically, if each group had been given the opposite treatment, the results would have been the same. Even if the trial protocol were perfect and implemented to the letter (it never is), doctors are still guessing as to whether treatment A would benefit you or not. Sure, the odds are increased that you would get better with treatment A (presuming you have exactly the same disease the trial studied, have no risk factors that would have eliminated you from the original study and fit the age and sex profile of the study participants) but the idea that the scientific evidence gives you any certainty as to the result in your particular case is patent nonsense.
The "gold standard" of medical evidence, the clinical trial, works on groups of patients, averaging the benefits (or harm) and then determining how likely it is that the result could have been achieved by chance. An arbitrary probability is usually agreed so that if a result has greater than a one in 20 chance of happening by luck alone, then the result is not regarded as statistically significant, no matter how much benefit was delivered by the treatment.
Scientific evidence is useful if you are a government, insurance company or an organisation paying for the treatment of a population. When these organisations are paying for the disease-care of their citizens, members or subjects, they want to minimise their costs and maximise their benefits. Naturally, they will try to achieve the biggest "bang" for their "bucks" and will only pay for those treatments with the best overall result.
Now, you may be as comforted as the insurance companies would be that you are more likely to be in the 70% group, but I am not. What if Treatment A caused major side-effects or carried a risk of liver failure or sudden death? Would you not want to try the placebo first?
The 70% group may have been those that classified themselves as improved or significantly improved after one month on the treatment. Is that what you want? What happens after that? What were the results after 6 months, or a year? What happened to the children of those that took thalidomide or DPT? Questions about the longer-term effects are almost never answered in clinical trials because of the costs and complexities of long experiments.
If I were a patient, "substantial improvement" would not be enough. I would only be satisfied with a cure. Medical trials don't usually have "cure" as one of the outcome options. In fact, most medical treatments are so ineffective that, if cure was the outcome, they would have zero success.
The "scientific method" itself is only needed when a treatment is so ineffective that the only way to find out whether any difference exists between the treatment and a placebo is a clinical trial.
Let's say you take two packs of playing cards. Each card represents the average improvement of an individual patient and an ace is the good result you want to measure. If you want to know whether or not the packs contain the same number of aces and you can't take the packs apart and have a look, statistical analysis will, in effect, turn one card over from each pack and check to see if it is an ace. If one pack contains only aces while the other contains none, it won't take many comparisons to work out that there is a major difference between the two packs. You don't need a statistical analysis at all.
Contrast this with the situation of very minor differences between the packs so that there are 5 aces in one pack and 4 in another. Now you are going to need to turn over huge numbers of cards before you can be statistically certain that there is a difference. Statistical packages exist that will calculate how many cards you will need to turn over to be reasonably certain a difference exists but in general, the more times you look for aces (or the more patients in a trial), the more certain you become. This is called statistical power.
The higher the power of the trial, the less likely that any difference you found during the experiment might be a random or chance effect. Power does NOT refer to the effectiveness of the treatment.
Highly-powered trials are the gold-standard of medical evidence, which is why huge numbers of patients are involved. Such expensive and elaborate trials are only necessary because the effects (benefits or harm) of the treatment are incredibly small. Next time you see the results of a study involving thousands of patients, look at the results very carefully. As impressive as the study sounds, it is likely that the effect of treatment is small, in fact, the effectiveness of the treatment is likely to be inversely proportional to the number of patients involved. Most drugs are in this category.
Imagine a trial designed to test the effectiveness of a drug to prevent a stroke. Let's say 40 out of a thousand in the placebo group have a stroke during the trial period and 20 out of a thousand in the medication group have a stroke.
Presuming the placebo did not contain sugar or other substances that could cause a stroke, you might rightly conclude that the medication led to 50% fewer strokes. The headline might read, "New drug cuts stroke risk in half". The reality is that only 20 out of a thousand (or 2%) actually benefited from the treatment.
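The arithmetic behind that contrast, relative versus absolute risk reduction, is worth spelling out. Using the quote's hypothetical stroke counts:

```python
# The quote's hypothetical trial: 40/1000 strokes on placebo,
# 20/1000 on the drug. The same data supports two very different
# headlines depending on which ratio you quote.

placebo_strokes, drug_strokes, n = 40, 20, 1000

relative_risk_reduction = (placebo_strokes - drug_strokes) / placebo_strokes
absolute_risk_reduction = (placebo_strokes - drug_strokes) / n
nnt = 1 / absolute_risk_reduction  # number needed to treat

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # 50%
print(f"Absolute risk reduction: {absolute_risk_reduction:.0%}")  # 2%
print(f"Patients treated per stroke prevented: {nnt:.0f}")
```

Both numbers are correct; the "cuts risk in half" headline quotes the relative figure, while the 2% figure (and the 50 patients treated per stroke prevented) is what the quote is pointing at.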
So the next time someone tells you they don't "believe" in alternative medicine "because it's not scientifically proven" - you can tell them that you don't "believe" in drug treatments because they are scientifically proven.
Simon King
unquote
"All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident."
Sunshine