January Roundup
Replication, general managers, bridging biological structure and function, and more.
I hope 2026 is off to a good start for everyone. I spent winter break with my family in Bangalore. The last time we visited home, my daughter was just 4 months old and had started rolling over. This time, at 2.25 years, she was chatting away with everyone from her younger cousins to great-grandparents. I loved watching her follow them around calling after them: akka, anna, ajji, and tata (the terms for older sister, older brother, grandmother, and grandfather in Kannada). I’m sad it was not mango season, but I made sure to have dosa abundance. The 75F weather was also an absolute treat before coming back to Wisconsin’s -20F.
In January, I published a piece on creativity in the sciences. I’ve been exploring serendipity, surprise, and how different types of research come about. I’m also interested in what it means to categorize research in different ways, whether along a one-dimensional basic-versus-applied spectrum or through Donald Stokes’ two-dimensional quadrants. This led me to Dean Simonton’s formal definition of creativity. In the post, I explore what it means to write an equation for creativity and offer an example that seems to contradict his definition.
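For those who haven’t seen it, Simonton’s definition (as I read it) scores an idea’s creativity as the product of three criteria: c = (1 − p) × u × (1 − v), where p is the idea’s initial probability (so 1 − p is its originality), u is its final utility, and v is the creator’s prior knowledge of that utility (so 1 − v is its surprise). Because the terms multiply, an obvious idea, a useless idea, or an idea whose value was already known all score near zero.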
Writing is an inherently dignified human activity by Celine Nguyen. I enjoyed reading Nguyen’s reflections on two years of maintaining a writing habit. This particularly resonated with me:
Early on, whenever I felt discouraged by my mediocre writing, I would cheer myself up like this. First, I would find a newsletter or blog I admired: stylish, well-written, distinctive in voice and approach. (And popular: they often had thousands of readers.)
Then, I would go into the newsletter’s archives and scroll down to the very first post I could find. It was always more raw, unpolished, and amateurish than the writing I was familiar with. I can’t describe how reassuring this was! I could see how people had become—through persistent and publicly-observable attempts—the writers that I knew and loved.
I do this too. Whenever I find a blogger with an interesting take on biology or biotech, I look for their first post and see how their writing and views have evolved. One of my writing resolutions for this year is to write more opinion pieces.
The golden age of vaccine development by Saloni Dattani for Works in Progress. An incredible essay detailing the key advances that drove vaccine development from its conception to the present day. The golden age of vaccine development is ahead of us! Essays like this give me the chills. Scientific progress is incredible, everything we do rests on the shoulders of giants, and there is infinitely more progress we can all contribute to. Also, I did not know this:
Jenner’s vaccine was spread by arm-to-arm transfer, a process that involved collecting fluid from cowpox (and sometimes horsepox) pustules with a lancet, then literally scratching it into the skin to produce immunity to smallpox in the first recipient. Subsequent recipients would develop their own pustules, from which fresh fluid was taken and scratched into others, continuing the chain of vaccination.
Is replication pro-progress or anti-risk? by Jordan Dworkin. Dworkin makes compelling points about how to think about the benefits of replication. I especially like his distinction between discovery and translational research. In my mind, replication is most valuable where individual studies or a small group of studies influence public policy or the general public narrative. Those are translational studies. Discovery research is a strong-link problem, and we should try to avoid treating it like a weak-link problem. Dworkin says,
Drug development is the canonical example. A paper reporting potential preclinical benefits of molecule X on cell line Y in a specialty journal may not generate substantial follow-on research or advance a broader subfield. But when that genre of paper is incorrect as often as it is correct, preclinical academic literature becomes an unreliable source of leads for biotech startups and pharmaceutical companies, and our system of knowledge translation trades efficiency for redundancy.
Framing papers as “correct” or “incorrect” is interesting to me, and perhaps not exactly what I look for in studies. This could be my discovery-research bias, but I would be more interested in understanding the “why” of an unreplicated study. What can we take away, what might have caused it not to replicate, what confounding factors might have been missed? In any case, I’m no expert on what matters in replication, and here is Stuart Buck’s back-and-forth with Dworkin.
What I didn’t expect about being a funder by James Özden. An interesting reflection on what you don’t know before becoming a grantmaker. I wish more people wrote posts like this about their own fields. For most professions, you don’t know what it’s really like unless you actually do the thing.
Why do research institutes often look the same? by Samuel Arbesman in Asimov Press. A great point to bring into metascience discussions. Many new research organizations start with a bold vision to be different from traditional academia, but over time they adopt familiar academic structures. There are many reasons for this, but one I hadn’t considered is the tax code. Arbesman writes,
Due to my involvement in the space of non-traditional research organizations, I speak with many people who are thinking about building new institutions. A common question that I get asked is whether to go non-profit or for-profit. This decision will impact the kind of people or organizations they approach for fundraising, the regulations they will need to adhere to, and so forth, and these things should not be taken lightly. Yet, is it not odd that our tax codes have reduced the complexity in how researchers think about science? We imagine the vast and high-dimensional space of outlier research institutions, and then are forced to collapse it into these two categories because of tax implications.
I had always thought of tax implications as downstream of decisions about organizational structure, not the other way around. Also, Overedge is a great catalog of new research organizations.
Building brains on a computer by Maximilian Schons in Asimov Press. A comprehensive essay on what it will take to truly emulate the brain. This is already a condensed form of the 175-page report Schons and team have written. They outline three core capabilities needed to emulate the brain: recording brain activity, reconstructing a brain wiring diagram, and modeling the brain by combining these data. Of these, I agree that recording brain activity is the biggest bottleneck. Recording activity from every single cell (neurons, astrocytes, and other glia), along with the various neurotransmitters and neuromodulators, lags far behind current wiring-reconstruction and modeling capabilities. Schons estimates 20 to 50 years to emulate the human brain. I would guess it will be closer to the upper end.
Horizontalization in biotech by Corin Wagen. An interesting observation about the transformation in the current biotech and drug discovery scene.
horizontalization is a natural response to increasing market size and complexity. As the market became big enough that there could be “a database company” or “an ads company,” it became advantageous for these capabilities to become their own firms rather than stay part of a single monolithic ur-company.
I liked this post because it reminded me of a constant battle in neuroscience projects I’ve been involved with: to buy or to build. You can choose the tried-and-true National Instruments route and program its data-acquisition hardware to converse with your animal-behavior setups and other recording equipment, or you can build a custom hardware and software stack. The former is rigid but reliable; the latter is more flexible but will probably break more often and require bespoke software.
There should be ‘general managers’ for more of the world’s important problems by Nan Ransohoff. I could not agree more with this. Why don’t the world’s biggest problems, of which there are many, have champions?
In my opinion, there should be ‘general managers’—GMs—for problems like these. These are founder-types who feel personally responsible for delivering a specific outcome (vs field-building generally); hyper-competent leaders who will pull whatever levers necessary to achieve the defined outcome. Most companies wouldn’t let an important initiative go unmanned or without a ‘directly responsible individual’ — why are we OK not having GMs for even more wide-reaching problems?
The 100x research institution by Andy Hall. An excellent report on one person’s use of AI to automate their research. This is another space where I wish people wrote more concretely about how they’re incorporating AI into their workflows, especially in science that involves the lab bench. When I ask my coworkers how they use LLMs, the most common answer is writing emails, which is sad. I have a post planned about what it’s been like starting research in a new field and how I’ve been using LLMs to help me learn faster.
How to be less awkward by Adam Mastroianni. As always, a super insightful piece. I feel like it was written for me.
School is way worse for kids than social media by Eli Stark-Elster. Very interesting. What problems are we trying to address? What policies are we implementing? This was well said:
Simple solutions are easy to think about and apply. But many important problems aren’t simple, which means that the simple solutions are often wrong. Social media, like fear of witchcraft and immigration, is yet another all too obvious answer to a much more complicated question: how did our children become so sad? School is also not the only right answer; as always, the truth is multi-faceted. But the data shows that it is one especially important facet.
Brendan Foody on teaching AI and the future of knowledge work on CWT. I enjoyed the second half of the conversation much more than the start. One moment got me thinking about training LLMs with current experts. Tyler asked,
If you could model a much older poet — William Wordsworth, Blake, John Milton, Rilke — some of my friends say there are no truly great poets left anymore. The best poets were way back when. Is it a goal to model the older poets and figure out what they would think, and rather than having Larry Summers and Cass Sunstein come in, that you have some AI-generated model of John Milton?
Should we be trying to capture the actual GOATs as opposed to current experts? Sure, I could be an expert in, say, biomedical sciences, but would it be better to use my expertise to train a Watson or a Crick, and then use those avatars to provide feedback to the models? It would be much easier for me to evaluate whether an agent is personifying Watson than to actually be Watson. I also found this analysis of what makes great poetry by Hollis Robins to be very interesting.
Beyond the endless frontier by Ben Reinhardt. I’m intrigued to see how the NSF’s Tech Labs initiative unfolds. It’s a unique opportunity to experiment with new models of funding. But as Reinhardt writes, it’s critical that the NSF does this right to prevent Tech Labs from becoming more of the same, i.e., funding to professors at universities. I especially liked Reinhardt’s emphasis on taking risks on innovative ideas with scrappy teams rather than picking teams based on strong track records:
It’s likely that this call won’t fully address the “cold-start” problem. That is, if the program selects for teams that look like they have the best chance of getting the farthest during the tech lab, it is going to heavily favor pre-existing teams that have already gotten funding in one way or another to do as much work as possible. That precondition already puts a constraint on the novel impact that the tech labs can enable because there will be a selection effect for work that already looked promising to a more traditional funding source. It’s likely out of the scope of this initial effort, but in order to maximize the value of the tech labs program, TIP should also look at funding organizations that are able to start ambitious efforts “from scratch” to get them to a place where they would be a good fit for a full tech labs proposal. Another approach to this “cold-start” problem would be to accept a few earlier-stage teams into phase 0 based on potential and then judge them on whether they can make shocking amounts of progress in a short amount of time compared to other more mature teams.
The BBN Fund by Eric Gilliam. I’m excited for this new initiative to build more BBNs. You can read about what BBNs are here. Gilliam says,
This past year, I’ve embraced the role of a ‘field strategist’ for the BBN ecosystem. In this period — Stage 1 of the modern BBN experiment — I sought to verify that there was both demand for BBNs from ARPA-like funders and a supply of top researchers eager to found BBNs. Thanks to the UK’s Advanced Research + Invention Agency (ARIA), both have now been resoundingly verified. To provide just one data point in support: according to ARIA’s most recent fiscal year data, a small set of scrappy BBNs won more in ARIA funding during the year than every lab at the University of Cambridge combined. Stage 1 of the modern BBN experiment is now complete.
It is now time for Stage 2 of the experiment: building a “Convergent Research for BBNs.” The BBN Fund’s objective will be simple: seed a modern ecosystem of BBNs and work to maximize their overall technical ambition. If successful, we will forge a new pathway for today’s best applied, ambitious researchers to pursue ambitious R&D agendas — as Convergent Research has done with FROs.
Building science when execution is the bottleneck by Henry Lee. Lee argues for investing in automating scientific instrumentation since the scientific enterprise is bottlenecked by how fast humans can run experiments.
Experiments must be set up, calibrated, monitored, documented, cleaned up, and repeated. Errors propagate, context is lost, and human availability becomes a bottleneck. The distance between thinking and doing remains wide.
Lee further argues that general-purpose lab instrumentation cannot deliver the flexibility needed to carry out varied experiments at scale, and that we should instead invest in building highly specialized equipment. I largely agree with this argument given the current state of our technological capabilities. But is there a world, perhaps sci-fi for now, where a general-purpose lab robot (perhaps my robot clone) could execute experiments on highly specialized autonomous instruments?
The ground truth: crystal symmetry by Lewis Martin. A great post highlighting the importance of verifying data quality as datasets grow ever larger. Trained human experts are still needed to validate data quality. This reminds me of the post about AI radiologists and keeping humans in the loop.
Just one scientific publication this time, because it’s a big one:
A cool methods paper, Correlative voltage imaging and cryo-electron tomography bridge neuronal activity and molecular structure by Jung et al., combines voltage imaging to measure neuronal electrophysiological patterns with cryo-electron tomography to visualize molecular structures within the same neurons. Their goal was to link the internal molecular architecture of neurons with their electrical firing patterns, i.e., correlating structure with function.
The experimental approach is elegant: researchers grow neurons on tiny metal grids that fit under both optical and electron microscopes. First, they electrically stimulate the neurons while recording their responses with a fluorescent voltage dye. This allows them to group neurons into three categories based on how strongly they respond to stimulation. Immediately after imaging (within 15 minutes), they flash-freeze the grids to preserve the cells in their natural state. They then use cryo-electron tomography to capture nanometer-scale images of structures inside the cell body. Finally, they analyze the distribution and structural states of ribosomes (the cell’s protein-making machines) in each neuron.
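As a mental model of that grouping step, here is a toy sketch. Everything in it is invented for illustration: the neuron counts, the ΔF/F values, and the use of k-means are my assumptions, not the authors’ pipeline, which may draw its group boundaries very differently.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Invented stand-in for the paper's data: each neuron's peak dF/F
# response to electrical stimulation. Roughly half respond weakly
# or not at all, echoing the non-responder fraction discussed below.
peak_dff = np.concatenate([
    rng.normal(0.02, 0.01, 33),   # non-responders
    rng.normal(0.10, 0.02, 15),   # weak responders
    rng.normal(0.30, 0.05, 12),   # strong responders
]).clip(min=0)

# Split neurons into three categories by response strength.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    peak_dff.reshape(-1, 1)
)

# Report groups ordered from weakest to strongest mean response.
order = np.argsort([peak_dff[labels == k].mean() for k in range(3)])
for rank, k in enumerate(order):
    members = peak_dff[labels == k]
    print(f"group {rank}: n={members.size}, mean peak dF/F = {members.mean():.3f}")
```

The interesting scientific choices all hide inside this step: what counts as a “response,” whether the boundaries come from thresholds or clustering, and how stable the groups are with so few neurons each.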
While I think the development of this new method is excellent and correlating live function measurements with molecular structure is very cool, I have a few questions:
Their reasons for studying ribosomes, instead of molecules that actually drive electrical activity, don’t seem grounded in biology. Voltage-gated sodium and potassium channels directly control neuronal firing, but they’re too small and sparse to reliably visualize with current cryo-ET techniques. Ribosomes are large, abundant, and easier to image at high resolution; this seems like a “looking under the streetlight” problem. The connection between electrical responsiveness and ribosome structural states is indirect at best, requiring multiple inferential leaps (activity to translation changes to ribosome states, which occur over hours, not minutes). The paper demonstrates technical feasibility (we can correlate imaging with cryo-ET) but picks a molecular target based on what’s technically tractable rather than what’s biologically most relevant.
They also don’t actually know what types of neurons they’re looking at. There are no measurements of neuronal maturity, cell type, or health beyond the voltage imaging. The clusters could reflect differences in cell age, damage, or identity rather than true electrical properties. Over half the neurons (55%) don’t respond to stimulation. Are these immature, damaged, or just different?
The measurement and analysis of firing patterns also seem a little strange. They measure fluorescence changes from a voltage-sensitive dye, and the signal could vary based on dye uptake, not just electrical activity. Imaging at 100 Hz (every 10 ms) is too slow to properly resolve action potentials that last 2-3 ms (see the sketch below). Their “decay parameter” measures fluorescence amplitude at 60 ms after the peak, but action potentials are completely over by 3 ms. They might be measuring network reverberation or noise, not spike kinetics. Only 5-8 neurons per group for cryo-ET also seems too small. What does intragroup variability look like for ribosomal states? Would that account for all these differences?
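To put rough numbers on the undersampling worry, here is a quick simulation. It is deliberately simplistic: the spike is a Gaussian bump with a 2.5-ms width, and I ignore dye kinetics and camera exposure time, both of which smear the optical signal and are presumably what make 100-Hz imaging usable in practice.

```python
import numpy as np

# Ground truth: one action potential modeled as a Gaussian bump
# with a 2.5 ms full width at half maximum and peak amplitude 1.0.
dt = 0.01                          # fine time step, ms
t = np.arange(0, 100, dt)          # 100 ms of signal
sigma = 2.5 / 2.355                # FWHM -> standard deviation, ms
signal = np.exp(-0.5 * ((t - 50.0) / sigma) ** 2)

# Sample at 100 Hz (one frame every 10 ms) at every possible frame
# offset, and record the largest value each sampling grid catches.
frame_ms = 10.0
peaks = []
for offset in np.arange(0, frame_ms, dt):
    sample_times = np.arange(offset, t[-1], frame_ms)
    idx = np.round(sample_times / dt).astype(int)
    peaks.append(signal[idx].max())
peaks = np.array(peaks)

print("true peak amplitude: 1.00")
print(f"median recovered peak at 100 Hz: {np.median(peaks):.2f}")
print(f"worst-case recovered peak:       {peaks.min():.4f}")
```

Depending on where the frames happen to land, the recovered “peak” ranges from nearly the full amplitude to essentially nothing, with a median well under 10% of the true value. Real recordings won’t be this bad thanks to dye and camera integration, but it makes me want to know how sensitive their clusters are to frame timing.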
Overall, I think this is a really cool technique! It’s directly measuring function and structure within the same cell populations. I’m just cautious about what conclusions we can draw from their choice of biological application.
For reference, a recent review of advances in cryo-electron tomography.
That’s it for today, folks! I’ll have a couple of bigger updates to share next month. Thanks for reading.
Cover image: Molecular Clarity—Discovering What’s Possible with Cryo-Electron Tomography. Thermo Fisher Scientific. Source.

