
Much of behavioral economics is based on a very shaky foundation of psychology.

For instance, the priming experiments cannot be reproduced.

> This result confirms Kahneman’s prediction that priming research is a train wreck and readers of his book “Thinking Fast and Slow” should not consider the presented studies as scientific evidence that subtle cues in their environment can have strong effects on their behavior outside their awareness.

https://replicationindex.wordpress.com/2017/02/02/reconstruc...



I was thinking about this with Edward Bernays recently. He is considered the 'father of Public Relations', and comes up in a lot of pseudo-sciency conversations, like Adam Curtis's documentary, "The Century of the Self." *

Bernays comes up a lot in counter-cultural circles, but it's always bugged me that I've never seen any validating evidence that his techniques were actually effective, beyond "he came up with the idea of 'torches of freedom' and smoking went up among women."

His techniques sound interesting, and they feel like they'd be effective, but I had trouble finding any solid research that validated the idea that his techniques were effective at anything other than making himself famous.

(* Don't get me wrong – I love Curtis's films, as art. But it bugs me how they usually present a flood of information as fact, with little to no citation or corroboration. If they were just art, it wouldn't bug me, but a lot of people seem to swallow the films' conclusions wholesale.)


Absolutely.

It’s not limited to marketing either; management theory is full of this too.

Almost all of Frederick Winslow Taylor’s work and reputation was built on unverified case studies and anecdotes. (A great book on this is ‘The Management Myth’.)


The conversation is rarely about his actual techniques, but about his popularization of that particular approach to communicating with the public.

So, to really over-simplify: "If you want people to do X, don't just tell them to do X or order them or try to convince them; use psychology to understand which Y's and Z's you can tell them about that will statistically lead many people to do X".

He was just the guy who said "Hey, this emerging scientific field can be really useful in manipulating people even though it is early days!"


While reading "Thinking, Fast and Slow" I was astonished by the priming effects, and the confidence with which Kahneman presented information about priming. I became obsessed with the priming concept and began doing further research, only to find countless articles discounting priming's legitimacy. I haven't been able to pick up the book since, because of the way the information was presented as settled fact, when in reality the research was early and inconclusive.


I think you're not quite right with the timeline here. IIRC, when "Thinking, Fast and Slow" first came out, the research was considered pretty settled and had been replicated many times. It's only a few years afterwards that the replication crisis really hit psychology in a big way, and especially priming.

So with the benefit of hindsight, yes, he presented faulty research - but he didn't know that at the time, and couldn't have.

(Of course, if the lesson is that the entire field should be considered skeptically, I don't think I'd disagree)


See also Kahneman's response to this: https://replicationindex.wordpress.com/2017/02/02/reconstruc...

> I still believe that actions can be primed, sometimes even by stimuli of which the person is unaware. There is adequate evidence for all the building blocks: semantic priming, significant processing of stimuli that are not consciously perceived, and ideo-motor activation. I see no reason to draw a sharp line between the priming of thoughts and the priming of actions. A case can therefore be made for priming on this indirect evidence. But I have changed my views about the size of behavioral priming effects – they cannot be as large and as robust as my chapter suggested.

(Discussed on HN: https://news.ycombinator.com/item?id=15228712)


I think I have some old comment here where I tried to find anyone else who found that book completely unconvincing and really bad at justifying its conclusions.

People acted like I was insane.


I know that psychology is particularly devastated by the replication crisis, but aren't other branches of research affected as well?


I think one of the problems "we" have is that confirmation bias is a hell of a drug. By "we" I mean individuals, researchers, companies, corporations, governments, everybody. When we see something that intuitively makes sense to us, especially something that confirms our biases, we want to believe it: claims that support what we already believe feel true precisely because we already believe them. A bit circular, but I think that's reasonably clear.

Like you mention, the replication crisis really should leave most psychology results, not only past but also present, in serious doubt -- let alone anything contingent on those past results. But again the problem is that when new research comes out that confirms our biases, we don't expend the energy to challenge it.

And it seems that many in science are more concerned with themselves than with their science - something the replication crisis provides a great deal of evidence for. People aren't putting out trash science by accident. They need to publish, and trash science is what gets published. And the replication crisis hasn't changed this. There seem to have been, at best, token efforts to more fully ensure the truthfulness of what's being published. So we continue to believe what we want to believe, with science increasingly falling victim to the act of starting at a conclusion and working your way backwards.


Personally, I wonder if it's not simply that psychology is the first field to investigate the issue seriously.


Medicine is being hit almost as badly. In general, the hardest-hit fields are those where a lot of studies have low statistical power (usually due to noisy data and/or small sample sizes). Statistical power is the chance that a study detects an effect if the effect really exists (one minus the chance of a false negative). Counter-intuitively, low power also increases the chance that a given significant result is a false positive.

Combine that with non-reporting of negative results and you basically have a huge pile of bullshit.
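
A quick back-of-the-envelope calculation makes the counter-intuitive part concrete. The positive predictive value - the chance that a significant result reflects a real effect - depends on power, the significance threshold, and the base rate of true hypotheses in the field. A minimal sketch in Python; the numbers are purely illustrative assumptions, not estimates for any actual field:

    # P(effect is real | result came out significant), i.e. the positive
    # predictive value. All numbers are illustrative assumptions.
    def ppv(prior, power, alpha):
        true_pos = prior * power          # real effects correctly detected
        false_pos = (1 - prior) * alpha   # null effects wrongly flagged
        return true_pos / (true_pos + false_pos)

    print(ppv(prior=0.5, power=0.80, alpha=0.05))  # well-powered field: ~0.94
    print(ppv(prior=0.1, power=0.35, alpha=0.05))  # noisy field:        ~0.44

In the low-power, low-base-rate case, more than half of the "significant" results are false - before publication bias even enters the picture.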


I would extend this to "statistics-based knowledge" in general. If you don't understand the mechanism, it's not real knowledge.

For example: you have a drug that helps 100% of the time in a certain context (a combination of what the problem is and the person's genetics, maybe even epigenetics, and maybe even things like what they eat and what environment they're exposed to).

However, clinical trials don't go very deep in separating different kinds of people. We just don't know enough, we don't know how to even measure most things, and when we do, it's extremely costly. And since we don't understand the mechanism, we wouldn't know what to look for anyway. (Trials are needed precisely because even when we have _a_ mechanism, we aren't sure what else the drug does in the body, or about follow-up and higher-order effects - if we fully understood the mechanism, we wouldn't need the trials.) Plus, without a full understanding of the mechanism it's hard to combine the new "knowledge" with other knowledge: the experiment you got the data from is very specific, and the results are hard to generalize.

So the result is that the drug will be a complete failure, because we are unable to tell which people would benefit.

That's always the problem: when you don't have a very good understanding of the mechanism and all the consequences, you have to be lucky that the population you study is more or less the correct one. You don't even know when you've got the wrong one. If the drug fails for 95% of people (and the same holds in any other field that uses statistics), you may still have a hit for a sub-population (it's not as easy as "it's the other 5%", of course).

There are people working specifically on looking closer at some "failed drugs", and they have been able to find a few "miracle drugs" that way. Those drugs only help certain people (identified by genetic testing), but when they do, they work great - even though they failed their initial clinical trials big time.
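
Here's a toy simulation of exactly that scenario: a drug with a strong effect in a small genetic subgroup that looks like a dud in the pooled analysis. The population size, the 5% responder fraction, and the effect size are all made-up assumptions:

    # Toy simulation (all numbers made up): a drug that strongly helps a
    # hidden 5% genetic subgroup looks like a failure in the pooled trial.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 500
    responder = rng.random(n) < 0.05   # hidden genetic subgroup
    treated = rng.random(n) < 0.5      # randomized assignment
    # Baseline noise, plus a large benefit only for treated responders.
    outcome = rng.normal(0, 1, n) + 1.5 * (treated & responder)

    pooled = stats.ttest_ind(outcome[treated], outcome[~treated])
    sub = stats.ttest_ind(outcome[treated & responder],
                          outcome[~treated & responder])
    print("pooled trial:    p =", pooled.pvalue)  # typically well above 0.05
    print("responders only: p =", sub.pvalue)     # typically tiny

The catch, of course, is that in a real trial there is no "responder" column - the signal is in the data, but you don't know which rows to condition on.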

Over the years I have gotten much more skeptical of all statistics-based "knowledge". Reading those studies always leaves a strange taste in my mouth. Something doesn't feel right. It does not taste like knowledge. I do see the relevance, of course, given that it is often the only way to make practical progress.


>That's always the problem: when you don't have a very good understanding of the mechanism and all the consequences, you have to be lucky that the population you study is more or less the correct one. You don't even know when you've got the wrong one. If the drug fails for 95% of people (and the same holds in any other field that uses statistics), you may still have a hit for a sub-population (it's not as easy as "it's the other 5%", of course).

Good point, but this kind of subgroup analysis also runs into the multiple comparisons problem. If you test all kinds of subgroups (or just do a lot of any kind of tests), chances are good that you will run into false positives. In such scenarios, it can become more probable than not that a given positive result is in fact false. Combine that with the base-rate fallacy (the great majority of drug trials yield null results, but this is not taken into account) and you are in really bad shape.
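
The flip side is just as easy to simulate: slice pure noise into enough subgroups and some of them will look like responders. A sketch with illustrative numbers:

    # Slice pure noise into enough subgroups and some look like responders.
    # There is no real effect anywhere in this data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    alpha, n_subgroups = 0.05, 40
    hits = sum(
        stats.ttest_ind(rng.normal(0, 1, 50),
                        rng.normal(0, 1, 50)).pvalue < alpha
        for _ in range(n_subgroups)
    )
    print(hits, "of", n_subgroups, "null subgroups came out 'significant'")
    # Expect about alpha * n_subgroups = 2 false positives. A Bonferroni
    # threshold (alpha / n_subgroups) is the crudest correction for this.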

There was a replication study a while ago that tried to replicate IIRC 50 or so "landmark" cancer studies, and only came up with significant results in 6 cases.

Bayesian methods go a long way towards solving these issues, but there is no cure for low-power studies. They just can't tell you much, and will lead you heavily astray if you don't properly account for multiple comparisons, etc.

>Over the years I have gotten much more skeptical of all statistics-based "knowledge". Reading those studies always leaves a strange taste in my mouth. Something doesn't feel right. It does not taste like knowledge. I do see the relevance, of course, given that it is often the only way to make practical progress.

Clinical studies of new drugs have uncertainty, so any decision-making based on their results must take this uncertainty into account. You can't get away from statistics here. The bad taste in your mouth may come from the unintuitive and usually inappropriate use of P-values and null-hypothesis significance tests. The vast majority of researchers are actually mistaken about what P-values even mean. Most think that they are "the chance of a false positive" or something similar, which is completely wrong: a P-value is the probability of seeing data at least as extreme as yours if the null hypothesis were true, and it says nothing directly about the probability that any hypothesis is true.

Bayesian methods help here, because they are the only valid way of combining past information with new information.
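
As a minimal sketch of what "combining past information with new information" means in practice, here is a beta-binomial update of a drug's response rate (all counts invented for illustration):

    # Beta-binomial update of a response rate. All counts are invented.
    # Prior: earlier trials saw 12 responders in 100 patients -> Beta(12, 88).
    a, b = 12, 88
    # New trial: 9 responders out of 40 patients.
    responders, n = 9, 40
    a, b = a + responders, b + (n - responders)
    print("posterior mean response rate:", a / (a + b))  # 21/140 = 0.15

The prior mean was 0.12, the new trial alone says 0.225, and the posterior lands at 0.15: each source of evidence gets weight in proportion to how much data stands behind it.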


> but this kind of subgroup analysis also runs into the multiple comparisons problem. If you test all kinds of subgroups (or just do a lot of any kind of tests), chances are good that you will run into false positives.

As I said, the issue is statistics-based "knowledge". What you just said continues down that path, so of course it does not solve the problem; it actually makes it worse, because now we throw more randomness at randomness and get random matches - but still not one bit more understanding.


Almost all knowledge is "statistics-based." Anytime you are not 100% sure about something, your knowledge about that thing is inherently probabilistic. There is absolutely no way around it.

You can't get away from statistics. You can only replace bad statistics with good. If you ignore the statistical aspects of an experiment, such as multiple comparisons, then your reasoning is even worse!

You can't just "decide" to not use "statistics based knowledge" any more than you can decide that your experiment is not subject to uncertainty or error. You could say that, but that doesn't make it true.

Bayesian statistics are much more intuitive, however. Bayes' theorem is actually a generalization of contraposition (if A implies B, then not-B implies not-A) to situations where we are not certain of A and B.
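
You can check that numerically: if A makes B very likely and we then observe not-B, Bayes' theorem drives our belief in A toward zero, and in the limit of certainty it reduces to ordinary contraposition. A tiny sketch with illustrative numbers:

    # Probabilistic modus tollens: A makes B very likely, we observe not-B,
    # and belief in A collapses. All numbers are illustrative.
    def p_A_given_not_B(p_A, p_B_given_A, p_B_given_not_A):
        num = (1 - p_B_given_A) * p_A
        den = num + (1 - p_B_given_not_A) * (1 - p_A)
        return num / den

    # A nearly implies B; observing not-B nearly rules A out.
    print(p_A_given_not_B(p_A=0.5, p_B_given_A=0.99,
                          p_B_given_not_A=0.5))  # ~0.02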


Why do you re-interpret what I wrote instead of going with what I wrote? Why do you lecture me about things that I didn't write?


>As I said, the issue is statistics-based "knowledge". What you just said continues down that path, so of course it does not solve the problem; it actually makes it worse, because now we throw more randomness at randomness and get random matches - but still not one bit more understanding.

You seem to be saying that nothing is gained from statistics-based "knowledge", and that attempting to use better statistics is just throwing "randomness upon randomness." If that isn't what you mean, then you are being unclear. All of your criticisms of statistics are very vague and do not cite any specific problems.

I gain understanding from "statistics-based knowledge." If you do not, then that is a problem you should solve by reading more about these issues.


Absolutely, and we should have a lot of skepticism toward conclusions from those branches of research as well.


On the other hand, priming is a real effect.



