23 Comments
Darby Saxbe's avatar

Great post. Earmarking funding for pre-registrations, as well as additional funding for replications, and requiring open data - these are all excellent and highly feasible ideas. I've wondered why NIH and NSF don't already do this.

Askewnaut's avatar

if we threw away all social science research, we would lose absolutely nothing. all of it could disappear tomorrow with zero effect. i take that back; intuition would likely lead to BETTER outcomes than applied social science. curious, i asked ChatGPT what the most important findings of social science have been. i was not disappointed (that's to say, i was disappointed, but not surprised). it spit out some of the most obvious facts of human existence, the kind that anyone with a few days of life on earth and a below-average iq could immediately tell you - prospect theory: people violate rational choice assumptions. stanford replication: delayed gratification leads to better life outcomes. broken windows: visible disorder contributes to crime. coleman report: upbringing matters. public choice theory: politicians respond to incentives. (LMAO.) minnesota twin study: psychological traits are heritable.

anyone not severely mentally retarded could tell you all of this without thinking for 10 seconds.

99.9% of "social science" is political and ideological propaganda.

SeeC's avatar

Yep pretty much.

While it’s somewhat interesting to have a method to "prove" that most of it is nonsense or irrelevant, this is still quite a waste of time.

You won’t convince the people who refuse to do proper science to behave by a rational demonstration.

In any case it all boils down to letting women into academia, for which most of "social science" has been created. You can’t really fix anything until they are told to fuck off.

Askewnaut's avatar

there are a lot of potential "root causes" one could point to for societal and even civilizational decline. but if one had to choose just one ground-level, tangible cause, it would probably be "women's lib" / feminism. catastrophic at a civilizational level.

SeeC's avatar

Pretty much. But feminism is downstream of female behavior, so we were fucked the moment they got too much power.

Some women are pretty decent at operating in a more rational way (depending on settings) but they are such a tiny minority that it’s not worth it.

I think it’s not the first time that society has gotten in trouble for giving women too much power; I believe the Bible and Christianity were basically created to deal with this problem (of course it also has some rules for men’s behavior, but it all starts with the necessity of women’s submission).

Askewnaut's avatar

right. and interestingly, pretty much every single religion and cultural tradition has learned and taught this - the role of women is supportive. every civilization on earth. and i happen to be christian, but leaving that completely aside, tens of thousands of years of basic evolutionary science teaches the exact same thing. for VERY obvious reasons, the woman's role is primarily nurturing and raising children and keeping the home. they have been honed by thousands of years of evolution to perfect this role, and men, their role. as "scientific" as libs claim to be, they're willfully blind to this, or they just flat-out lie. the feminization of society has been an abject failure. specifically white women, the most coddled, comfortable, safe group of humans in the history of the planet, living in opulent comfort and luxury, are now like, "oh hey, thanks for giving us literally everything, now get off your throne (that we built), we deserve to sit there. you OWE us that." like, WHAT?!?! and weak men are like, "oh. yeah. sorry. sorry for literally dying by the millions to give you all this. here ya go."

Ross Denton's avatar

I read this in full and I’m not clear on how the replication was done, which undermines the credibility for me. I also have no idea how it handles more qualitative papers, or papers with niche audiences.

Also, I perceive an anti-education feel, which appeared to persist in the face of relatively good replicability.

Thom Scott-Phillips's avatar

Great piece.

One observation / question: You rightly identify that much social science is atheoretical — but then in the "What To Do" section there are no recommendations specifically about theory. What can be done?

Two related points:

- There are essentially zero people with "theoretical psychology" as their primary job description. To the extent that people do theory as a goal in itself, it is almost always alongside empirical work. In contrast, in the older sciences theory often stands on its own two feet.

- Many social scientists seem to view deep thought about the nature of things as philosophy, and as such not part of their discipline. But philosophy has its own problems, one of which is a dearth of engagement with what findings in social science might have to say. So there is a gap here, and few if any career incentives to bridge it.

Kristine Sunday's avatar

Are you using quant replication logic to critique qual social science research? That makes no sense to me. Replication isn’t the standard for qualitative work.

Georg Elsner's avatar

Any article that begins with an unironic oxymoron is not worth the time it takes to read.

There is no science in anything 'social'.

blake harper's avatar

I'm sympathetic to this view, but I'm curious where you came by it.

Mohan's avatar

The main paper I’m working on is a registered report. It’s been an uphill struggle. There are only two journals in my field that accept registered reports at all. While I have a revise and resubmit for phase 1, back-and-forth with the referees is taking so long that I would’ve had a normal paper out by now.

I’m also unconvinced registered reports will fix everything in my field, where the critical data sets are large pre-existing ones. For my first paper on a given data set, I can say I’ve never accessed the data set - that’s at least a clear cut assertion, although it requires trust. For all subsequent papers, I have to say that I have access to the data set but have not run the specific analysis in question. That is so much greyer… what counts as “the same analysis”?

I’m grappling with this question right now as I’m about to access a new data set and I want to do some exploratory work – I feel like I should keep a log of everything I try…
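
To make that concrete, below is roughly the kind of log I have in mind - a minimal sketch in plain Python, where the file name, the fields, and the example entry at the end are all placeholders rather than my actual setup:

```python
# minimal sketch of an exploratory-analysis log: one append-only record per
# analysis attempt, written before looking at any results
import json
import hashlib
from datetime import datetime, timezone

LOG_PATH = "analysis_log.jsonl"  # placeholder file name

def log_analysis(dataset, outcome, predictors, model, notes=""):
    """Append one JSON-lines record describing an analysis attempt."""
    spec = {
        "dataset": dataset,
        "outcome": outcome,
        "predictors": sorted(predictors),
        "model": model,
    }
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # hash of the bare specification, so identical analyses get identical hashes
        "spec_hash": hashlib.sha256(
            json.dumps(spec, sort_keys=True).encode()
        ).hexdigest()[:12],
        "notes": notes,
        **spec,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

# made-up example entry, not from my actual data set
log_analysis("panel_v2", "wage_growth", ["education", "age"],
             "OLS with year fixed effects")
```

Hashing the bare specification (data set, outcome, predictors, model) at least gives me something concrete to point at when the question of what counts as "the same analysis" comes up.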

Thom Scott-Phillips's avatar

What counts as the "same" analysis is a good question.

Your experience with registered reports is interesting. I am (sad to say) not surprised. Can you say a bit more? My guess is that referees disagree with each other and/or with you about (a) whether particular designs address the question, and (b) whether the question is coherent in the first place. I am interested to hear if that guess matches your experience.

Mohan's avatar

I'm running a registered report to replicate some particularly highly-cited papers that turn out to have major, major technical issues that mean the results are worthless. The back-and-forth is with one reviewer, who is largely making reasonable points, albeit initially not in enough detail for me to understand what they were actually requesting. But having two stages of back-and-forth just makes everything twice as slow as for a normal paper.

Not saying more to avoid doxxing myself, but happy to give concrete details by DM if you like.

Max Clark's avatar

An oversight committee formed through sortition, which is something scientists should embrace, would be a stepping stone to a systemic solution.

Thom Scott-Phillips's avatar

Yes! I like that. The sortition is key.

Manoel Galdino's avatar

Amazing. Thanks for the hard work and this post summarizing what you learned. I have a few questions; perhaps you can answer at least one. What about papers with qualitative methodologies? It's hard to even define replication for those, no? Were they included in the sample? And what about papers with only a formal model, not uncommon in economics (and a bit in polisci)? Those could help develop fields that lack theory. Were they included in the sample?

blake harper's avatar

This is why I love Substack

Richard Careaga's avatar

You had me at n = 23

TheIdealHuman's avatar

Sorry haven't had time to read yet but what is the metareplication rate?

Evidence Matters's avatar

Education is strong because the economists got traction in the top ed schools and policy schools, and because the Institute of Education Sciences, established in 2002 under George W. Bush with initial leadership from Russ Whitehurst, required that funded studies have well-powered causal designs, especially RCTs or regression discontinuities, with occasional tolerance for diff-in-diff.

j.'s avatar

Loved reading this as a PhD student. Do you have any similar analyses for areas like STEM and medicine?