The Four Horsemen of the Research Apocalypse
and how to counteract them to foster a healthier inquiry ecosystem
Between 2019 and 2023, Pew found that the percentage of people who distrust scientists more than doubled, from 13% to 27%, while the share who trust scientists “a great deal” fell by a third. And just these past weeks, the National Science Foundation temporarily froze grants as scientific research has become increasingly politicized.
I’ve worked in research for over a decade, and I consider it a true vocation. Research is my passion, and I’m deeply grateful that I get to participate in the industry. And yet, the criticisms and concerns people voice about research often seem grounded in at least some truth.
Part of why I’ve decided to prioritize this “year of writing” project is because I believe writing helps me clarify my view of the world, which in turn helps me act in addressing its deepest challenges. And with research, I think the loss of trust has been driven by four distinct challenges.
By understanding the rise of each challenge (or “horseman” in alarmist religious metaphor), we can develop a plan to counteract and balance its destructive influence. This will be a quick overview, and I might come back to these points later, but I feel like mapping this challenge will help guide future newsletters.
Now, let’s profile some eschatological equestrians.
Horseman #1: Elitism 🔌
Knowledge Is Power; Power Corrupts.
The power of research to change the world is one of the aspects that draws many of us practitioners to it. And that impact of research and “knowledge work” is something truly good about the field.
But the dynamics of power are complicated. And the power of research can pose a threat to other power centers, leading those competitors to attempt to absorb research as a capability and discredit alternative models. Whether it’s the academy or political organizations or “the media”, informational institutions benefit from creating monopolies on Truth production.
Though I’m sure others have theorized this before me, I imagine that many religious belief systems were initially rooted in simple scientific truths. Using the positions of the stars to predict the rainy season, for example. Or understanding how humans process traumatic events over time. With time, those with access to insight into nature’s processes establish credibility within that domain. And the enterprising among them can position themselves as experts on broader topics -- moral frameworks, or the origins of the universe, or maybe a cosmically ordained hierarchy that coincidentally privileges their own social group.
When truth-seeking institutions become co-opted by a single group, it is simply a matter of time until truth-seeking becomes secondary to power-perpetuation.
How To Balance: Decentralization
While there is always an appeal to centralization -- pooling of resources, strength in numbers -- a healthy research ecosystem requires some decentralization. Individuals should be able to ask questions and explore the world and its principles. If high quality research systems are inaccessible to the broad population, then the broad population will substitute in lower quality systems.
People need to not only be able to access published research, but they also need access to the tools that allow for the production of new scientific knowledge.
Horseman #2: Profiteering 💰
R-O-Why?
I’ve done a lot of advertising research in my career, and as a result I’ve interacted with many advertising researchers. Most advertising researchers I know are fans of the social sciences, the study of media or the human condition broadly; it’s very rare that advertising itself is the draw to the work.
And yet, there is so much advertising research! There are many more ad researchers out there than human condition researchers. (A quick LinkedIn title search found 633 times more of the former than the latter.) How do so many people end up studying ads, a thing most people claim to hate?
It’s the economics of it. Having sold various types of studies, I can say that ad research is the most sellable research I’ve ever come across. Either you are selling to a publisher, who is using the research to increase dollars spent on their platform or program, or you are selling to an advertiser, who is using your coefficients to optimize their advertising and increase their own earnings. People are giving you money for work that converts to more money.
To be clear, I do not believe advertising is inherently evil. Though an excess of advertising (as I’ve said before) is one of many forces contributing to bad vibes and modern malaise, advertising in moderation serves an important communication function. People operate with “unknown unknowns”, and advertising allows people to find out about products; high quality advertising research helps get the right message to the right person.
Other easily monetizable social science research areas include financial analysis and political polling. These areas, too, can serve real human needs, but together they are far from comprehensive. And without a holistic research ecosystem, society maintains dangerous blind spots.
How To Balance: Humanism
We must continue to ask deep questions that are important in themselves, even if they are not monetizable.
People have so many questions -- about what to read, where to go on vacation, how to make meaning through life’s difficulties. And while there are ways people can seek solutions to these questions without engaging in social science research, these are also questions where social science research is well suited to provide guidance.
Ultimately, a research system that is guided solely by economics is not optimized for truth-production. It is optimized to produce truth-adjacent artifacts. PowerPoint slides with graphs going up-and-to-the-right can be commodified. Truth, though, cannot be. And when we try to commodify it, we risk losing connection to it entirely.
Horseman #3: Voyeurism 🔭
We Used To Talk…
Some of the West’s oldest written works exploring the bounds of human experience are Plato’s dialogues. And deeply textual traditions have long been critical to the passing on of wisdom to societies and civilizations globally. For millennia, evolutions in communication technology -- the alphabet, woodcuts, the printing press, the radio, the television -- preserved the centrality of text.
And then, we had the computing revolution. Everything was digitized, distilled to digits. By boiling human interactions down to binary frameworks of doing this or that, it enabled a kind of behavioral research revolution. Every time we interact with a computer or smartphone, we expand a behavioral profile accessible to those who develop the software we use.
And this digital behavioral data revolution was really wonderful in terms of expanding what’s possible in research! Researchers could examine revealed preferences, which often conflict with stated desires. As with market-led research, I do not think behavioral research is inherently bad. I actually think it’s quite amazing what can be done today at scale, and I look forward to seeing this kind of research proliferate.
But! There’s an ideology in research today that large behavioral datasets make human-to-human research designs unnecessary. I think that not only threatens research as a truth-producing function, but also makes it dangerously dehumanizing as a practice. Humans have “revealed preferences” for clickbait headlines that validate our worldviews, for staying up too late doomscrolling, for eating ourselves into chronic health conditions. If you never talk to the people you study, you risk completely misunderstanding who they are as human beings.
How To Balance: Conversations
A better research model centers actual human feedback, ideally in an open-ended setting. We should compensate people for their time and input. We should ask for people’s opinions and hopes and fears, not just their internet cookies.
Perhaps there will be some cases where expert insight results in product choices that don’t directly map to people’s preferences, but at least that can be methodically considered.
Horseman #4: Bubbles 🫧
A World Where Men Bite Dogs.
We talk about algorithms in news as if they are an internet age construction, but they are not new.
Here’s an old one:
articles = sorted(articles, key=lambda x: x['bleeds'], reverse=True)
That is to say, “if it bleeds, it leads”. In the days of broadcast television, tales of violence were told at the top of the hour, teased by a news anchor before the credits of a prime-time sitcom had finished rolling.
Let’s do another classic:
def check_if_bite_is_news(biter, victim):
    # "Dog bites man" is expected, so it isn't news
    if biter == "dog" and victim == "man":
        return False
    # "Man bites dog" is unusual, so it is news
    if biter == "man" and victim == "dog":
        return True
    return False  # anything else: no story
This is the old adage that “a dog biting a man isn’t news; a man biting a dog is news”. Essentially, newsworthiness is a function of unusualness. Heuristics like these helped editorial teams decide where to invest newsroom resources and how to prioritize limited time within a broadcast or scarce space within a broadsheet.
These classic news algorithms were designed to drive attention and engagement, even before such metrics were so readily and rapidly quantifiable. And again, this is a smart impulse. Everyone wants their writing to be read. I want my writing to be read. But when we all use these algorithms, and when we train machines to use these algorithms at scale, we end up with a funhouse mirror view of the world.
Communications professor George Gerbner proposed the idea of “Mean World Syndrome”, based on survey research in which he found that higher levels of television viewership correlated with perceiving people as more violent and selfish. These distortions appeared even when the “news algorithms” were run by humans; now billions of personalized algorithmic feeds operate opaquely to maintain engagement, so our sense of the world is likely even more warped than before.
If engagement-optimized news products are the only lens through which you view the world, then all you will see are the newsworthy exceptions to the established rules. You will have little familiarity with mundane normalcy that is much of human experience. You will have an upside-down perspective that overemphasizes moments of tragedy and crisis and drama.
You’ll think you live in a world where men bite dogs.
How To Balance: Randomness
Randomness is an algorithm too, actually. It also happens to be a golden oldie of research practice: random sampling is the foundation of representative research. If you want a clear picture of the human experience in this moment, then a happenstance cross-section is the way to go.
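To make the contrast with the engagement algorithms above concrete, here’s a minimal sketch of simple random sampling in Python. The names (random_sample, stories) and the toy population are my own illustrations, not anything from a real newsroom or survey:

```python
import random

def random_sample(population, k, seed=None):
    """Draw a simple random sample: every member has an equal
    chance of selection, so the sample mirrors the population."""
    rng = random.Random(seed)  # seed only for reproducibility
    return rng.sample(population, k)

# Hypothetical world: one dramatic story per 99 mundane ones.
stories = ["man bites dog"] + ["dog bites man"] * 99

sample = random_sample(stories, 10, seed=42)
# Only one dramatic story exists, so at least 9 of the 10 drawn
# are mundane -- unlike an engagement-ranked feed, which would
# put the rare exception at the top every time.
```

The point is that the selection rule itself is neutral: where “if it bleeds, it leads” systematically overweights the exceptional, an equal-probability draw reflects the boring majority in proportion to how common it actually is.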
It doesn’t have to be the only way you look at the world. Finding curious pockets of novelty is entertaining, and there’s nothing wrong with seeking entertainment.
But if your motivation is to understand the world to better feel prepared to handle the challenges ahead, then you will be better served by a quality research design than by any sort of algorithmic feed and engagement-driven news product where truth-production is not the primary objective.
How To Prevent (Delay?) The Research Apocalypse
With these specific challenges in mind, I wanted to create a space for research that could balance these potentially destructive forces at play within the research ecosystem.
It is for these reasons that I launched Mic Check Media, a “carpooling for survey research” platform where I run a survey every morning and anyone can add their own questions to it. I have explicitly designed it to confront these “four horsemen”, using the counteracting forces I identified:
Decentralization: Anyone can contribute, not just those with institutional backing.
Humanism: Studies are low-cost enough to encourage non-commercializable inquiry.
Conversations: We ask for people’s longform opinions, not just their clickstreams and cookies.
Randomness: All studies use a classic sampling approach for an honest cross-section of human experience.
I’m starting out with verticals collecting survey data to help people (1) find media they will enjoy, (2) converse about policy and (3) feel better about life. I have other verticals in mind that I plan to launch later this year.
If any of these points have resonated with you, it’d be amazing to have you join Mic Check and help fund our research. You can use the code TESTING123 for a 50% “co-founders discount” on any of our three levels of membership. Further, if you know anyone who conducts research and might be interested in joining our data co-op, please forward this along! Any friend of yours is a friend of mine!
I’ll continue to update this community on Mic Check as the site and dataset grows, focusing on particular datasets that I’m building out. I strongly believe that research is a tool that can make anybody’s life better, and I believe Mic Check can be a humanistic research platform that helps regular folks activate social science data to enable their best lives!