I would extend this to "statistics-based knowledge" in general. If you don't understand the mechanism, it's not real knowledge.
For example: suppose you have a drug that helps 100% of the time in a certain context (a specific combination of the underlying problem, the person's genetics, maybe even epigenetics, and maybe even things like diet and environmental exposure).
However, clinical trials don't go very deep in separating different kinds of people. We just don't know enough, we don't even know how to measure most things, and when we do, it's extremely costly. And since we don't understand the mechanism, we wouldn't know what to look for anyway. (If we fully understood the mechanism, we wouldn't need the trials; they are needed because even when we have _a_ mechanism, we are not sure what else the drug does in the body, or about follow-on and higher-order effects.) Plus, without a full understanding of the mechanism, it's hard to combine the new "knowledge" with other knowledge. The experiment you gained the data from is very specific, and the results are hard to generalize.
So the result is that the drug will be a complete failure in the trial, because we are unable to tell which people would benefit.
That's always the problem: when you don't have a very good understanding of the mechanism and all its consequences, you have to be lucky that the population you study is more or less the correct one. You don't even know when you got the wrong one. If the drug fails for 95% of people (and the same holds in any other field that uses statistics), you may still have a hit for a sub-population (it's not as simple as "it's the other 5%", of course).
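To make that concrete, here is a toy simulation (all numbers are invented for illustration, not taken from any real trial): a drug that strongly helps a small genetic sub-population looks like a failure in the pooled analysis, and only an oracle that already knows the subgroup can see the effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 1000                    # patients per arm
responder_rate = 0.05       # hypothetical: 5% carry the right genetics
effect_in_responders = 1.0  # strong benefit, in standard-deviation units

# Outcome is pure noise except for the responder subgroup in the treated arm.
control = rng.normal(0, 1, n)
treated = rng.normal(0, 1, n)
responders = rng.random(n) < responder_rate
treated[responders] += effect_in_responders

# Pooled analysis: the subgroup signal drowns in the noise.
t, p = stats.ttest_ind(treated, control)
print(f"pooled: mean diff = {treated.mean() - control.mean():.3f}, p = {p:.3f}")

# With an oracle that knows the subgroup (which the trial does not have):
t2, p2 = stats.ttest_ind(treated[responders], control)
print(f"responders: mean diff = {treated[responders].mean() - control.mean():.3f}, p = {p2:.3g}")
```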
There are people who work specifically on taking a closer look at some "failed drugs". They have been able to find a few "miracle drugs" that way. The drugs only help certain people (they use genetic testing to find them), but when they do, they work extremely well. Yet they failed their initial clinical trials big time.
Over the years I have gotten much more skeptical of all statistics-based "knowledge". Reading those studies always leaves a strange taste in my mouth. Something doesn't feel right; it does not taste like knowledge. Of course I see the relevance, given that it is often the only way to make practical progress.
>That's always the problem: when you don't have a very good understanding of the mechanism and all its consequences, you have to be lucky that the population you study is more or less the correct one. You don't even know when you got the wrong one. If the drug fails for 95% of people (and the same holds in any other field that uses statistics), you may still have a hit for a sub-population (it's not as simple as "it's the other 5%", of course).
Good point, but this kind of subgroup analysis also runs into the multiple comparisons problem. If you test all kinds of subgroups (or just run a lot of tests of any kind), chances are good that you will run into false positives. In such scenarios it becomes more probable than not that a given positive result is in fact false. Combine that with the base-rate fallacy (the large majority of drug trials yield null results, but this is not taken into account) and you are in really bad shape.
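A quick sketch of how bad this gets (the trial sizes and subgroup count are made up, but the arithmetic is general): if you test 20 subgroups at the usual 0.05 threshold on a drug that does nothing, you get at least one "significant" subgroup about 64% of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_trials = 2_000     # simulated null trials: the drug truly does nothing
n_subgroups = 20     # hypothetical post-hoc subgroups tested per trial
n_per_arm = 50

hits = 0
for _ in range(n_trials):
    for _ in range(n_subgroups):
        treated = rng.normal(0, 1, n_per_arm)   # no real effect anywhere
        control = rng.normal(0, 1, n_per_arm)
        if stats.ttest_ind(treated, control).pvalue < 0.05:
            hits += 1
            break  # one "significant" subgroup is enough to claim a finding

print(f"null trials with >=1 'significant' subgroup: {hits / n_trials:.2f}")
# Expect about 1 - 0.95**20 ~= 0.64, even though there is nothing to find.
```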
There was a replication study a while ago that tried to replicate IIRC 50 or so "landmark" cancer studies, and only came up with significant results in 6 cases.
Bayesian methods go a long way towards solving these issues, but there is no cure for low-power studies. They just can't tell you much, and will lead you heavily astray if you don't properly account for multiple comparisons, etc.
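To illustrate why low power is so damaging (again with invented numbers): when a small true effect is studied with small samples, the estimates that do reach significance systematically exaggerate the effect, sometimes called a "Type M" (magnitude) error.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

true_effect = 0.2   # small but real effect, in standard-deviation units
n_per_arm = 30      # small arms: power is only about 12% in this setup
sims = 20_000

significant = []
for _ in range(sims):
    treated = rng.normal(true_effect, 1, n_per_arm)
    control = rng.normal(0, 1, n_per_arm)
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        significant.append(treated.mean() - control.mean())

print(f"power: {len(significant) / sims:.2f}")
print(f"average estimate among significant results: {np.mean(significant):.2f}")
# The true effect is 0.2, but conditioning on significance roughly
# triples the apparent effect size.
```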
>Over the years I have gotten much more skeptical of all statistics-based "knowledge". Reading those studies always leaves a strange taste in my mouth. Something doesn't feel right; it does not taste like knowledge. Of course I see the relevance, given that it is often the only way to make practical progress.
Clinical studies of new drugs have uncertainty. Thus any decision-making based on their results must take this uncertainty into account; you can't get away from statistics here. The bad taste in your mouth may come from the unintuitive and usually inappropriate use of P-values and null-hypothesis significance tests. The vast majority of researchers are mistaken about what P-values even mean: most think they are "the chance of a false positive" or something similar, which is completely wrong. A P-value is the probability of seeing data at least as extreme as yours if the null hypothesis were true; it says nothing directly about the probability that your finding is false.
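A back-of-the-envelope illustration of why "p < 0.05" is not "a 5% chance of being wrong" (the base rate, alpha, and power below are assumed numbers, not data from any real field):

```python
# What fraction of "p < 0.05" findings are actually false?
base_rate = 0.10   # assume only 10% of tested drugs truly work
alpha     = 0.05   # significance threshold (false-positive rate under the null)
power     = 0.80   # chance a truly working drug reaches significance

true_hits  = base_rate * power          # 0.08
false_hits = (1 - base_rate) * alpha    # 0.045
print(f"P(no real effect | significant) = {false_hits / (true_hits + false_hits):.0%}")
# ~36%: over a third of "significant" findings are false, despite p < 0.05.
```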
Bayesian methods help here, because they are the only valid way of combining past information with new information.
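For instance, a minimal conjugate-prior sketch (a beta-binomial model; the prior and the trial counts are made up for illustration) of folding past information into new data:

```python
from scipy import stats

prior_a, prior_b = 2, 8            # prior belief: response rate around 20%
successes, failures = 12, 18       # hypothetical new trial: 12 of 30 respond

# Conjugate update: just add the observed counts to the prior parameters.
posterior = stats.beta(prior_a + successes, prior_b + failures)
print(f"prior mean:     {prior_a / (prior_a + prior_b):.2f}")
print(f"posterior mean: {posterior.mean():.2f}")
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```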
> but this kind of subgroup analysis also runs into the multiple comparisons problem. If you test all kinds of subgroups (or just run a lot of tests of any kind), chances are good that you will run into false positives.
As I said, the issue is statistics-based "knowledge". What you just described continues down that path, so of course it does not solve the problem; it actually makes it worse, because now we throw more randomness at randomness and get random matches, but still not one bit more understanding.
Almost all knowledge is "statistics-based." Anytime you are not 100% sure about something, your knowledge about that thing is inherently probabilistic. There is absolutely no way around it.
You can't get away from statistics. You can only replace bad statistics with good. If you ignore these statistical aspects of an experiment such as multiple comparisons, then your reasoning is even worse!
You can't just "decide" not to use "statistics-based knowledge" any more than you can decide that your experiment is not subject to uncertainty or error. You could say that, but that doesn't make it true.
Bayesian statistics are much more intuitive, however. Bayes' theorem is effectively a generalization of contraposition (if A implies B, then not-B implies not-A) to situations where we are not certain of A and B.
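To see the connection: in the limit of certainty, Bayes' theorem reduces exactly to the contrapositive rule (assuming P(not-B) > 0 so the conditional is defined):

```latex
% If A implies B with certainty, then P(B \mid A) = 1 and P(\neg B \mid A) = 0.
% Bayes' theorem then gives:
\begin{aligned}
  P(A \mid \neg B) &= \frac{P(\neg B \mid A)\, P(A)}{P(\neg B)} = 0, \\
  \text{hence}\quad P(\neg A \mid \neg B) &= 1.
\end{aligned}
```

With probabilities strictly between 0 and 1, the same formula quantifies how much observing not-B should lower your belief in A.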
>As I said, the issue is statistics-based "knowledge". What you just described continues down that path, so of course it does not solve the problem; it actually makes it worse, because now we throw more randomness at randomness and get random matches, but still not one bit more understanding.
You seem to be saying that nothing is gained from statistical knowledge, and that attempting to use better statistics is just throwing "randomness at randomness." If this isn't what you mean, then you are being unclear. All of your criticisms of statistics are very vague and do not cite any specific problems.
I gain understanding from "statistics-based knowledge." If you do not, then that is a problem you should solve by reading more about these issues.