December Roundup
My favorite CWT episodes of 2025, a number of metascience updates, and atomic resolution in cells.
Will my substack following grow exponentially? By Manil Suri. This was a fun read about a mathematician’s humorous examination of whether his subscriber growth is truly exponential. It was particularly interesting to me as I reach the end of my first calendar year of Substacking. I published my first essay on Substack on May 15, 2025. This December roundup essay will be the ninth. In addition to my own Substack, I published three commissioned pieces this year. While my Substack has shown no signs of exponential growth, I’m thankful to all my subscribers. As Suri concludes, I’ll be grateful if I can sustain long-term linear growth. Let’s not worry about the slope for now.
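As a side note, the test Suri describes is easy to sketch: exponential growth is a straight line on a log scale, so you can fit both a linear and an exponential model and compare residuals. Here's a minimal sketch in Python, with made-up subscriber counts (not my actual numbers):

```python
import numpy as np

# Hypothetical monthly subscriber counts (illustrative only)
months = np.arange(8)  # May through December
subs = np.array([40, 95, 150, 210, 260, 330, 380, 450])

# Linear model: subs ~ a + b*t, fit in the original scale
lin_coeffs = np.polyfit(months, subs, 1)
lin_rss = np.sum((subs - np.polyval(lin_coeffs, months)) ** 2)

# Exponential model: subs ~ A*exp(r*t), fit as a line in log scale
exp_coeffs = np.polyfit(months, np.log(subs), 1)
exp_rss = np.sum((subs - np.exp(np.polyval(exp_coeffs, months))) ** 2)

# Whichever model leaves the smaller residual sum of squares fits better
print(f"linear RSS: {lin_rss:.0f}, exponential RSS: {exp_rss:.0f}")
```

Real subscriber series are noisier than this, but a log-scale plot that looks straight is the classic tell of exponential growth.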
My year-end Conversations with Tyler review. I’ve been religiously listening to CWT since 2020. I listened to 33 out of the 36 episodes released in 2025. I missed two during our move from Seattle to Madison in late June. My top five episodes and the moments that stuck out the most:
Donald S. Lopez Jr. on Buddhism. Speaking on the misconception of Buddhism as a peaceful religion: “The peaceful religion part is something I think has been made much of, really, since the 19th century. But we have a lot of evidence of Buddhists going to war, of Buddhist monks serving as chaplains on the battlefield, even in the Second World War. Buddhism is a religion of peace in the sense that the Buddha really talked against violence, but we have, throughout Buddhist history, all sorts of Buddhist armies, Buddhist wars, and Buddhists killing each other and killing their enemies.”
Sheilagh Ogilvie on Epidemics, Guilds, and the Persistence of Bad Institutions. On Lady Mary Montagu raising the status of smallpox variolation in England: “She actually got an English doctor in Istanbul to do it to her two-year-old son. Then when she went back to England, she got it done. It was probably the first variolation in England. She got it done by the same doctor. She dragged him out of retirement and said, ‘Do my daughter.’ He did it. Because she was a member of the aristocracy, she made it the cool thing to do. She was an early 18th-century influencer, and it became the fashionable thing for aristocratic ladies and then upwardly mobile people to do.”
Blake Scholl on Supersonic Flight and Fixing Broken Infrastructure. I liked this one partly because I was sitting in the front row at the Progress Conference where this episode was recorded. On how we could make moving through an airport happen faster: “I think about this in the shower like every day. There is a much better airport design that, as best I can tell, has never been built. Here’s the idea: You should put the terminals underground. Airside is above ground. Terminals are below ground. Imagine a design with two runways. There’s an arrival runway, departure runway. Traffic flows from arrival runway to departure runway. You don’t need tugs. You can delete a whole bunch of airport infrastructure. Imagine you pull into a gate. The jetway is actually an escalator that comes up from underneath the ground. Then you pull forward, so you can delete a whole bunch of claptrap that is just unnecessary. The terminal underground should have skylights so it can still be incredibly beautiful. If you model fundamentally the thing on a crossbar switch, there are a whole bunch of insights for how to make it radically more efficient. Sorry. This is a blog post I want to write one day. Actually, it’s an airport I want to build.”
John Amaechi on Leadership, the NBA, and Being Gay in Professional Sports. On his decision to become a professor: “Credibility. Credibility. I think there is just a part of it. You mentioned before, the monster kind of analogy. I do everything I can to help people see that I have something to offer them in this new context, and being a professor is part of that.” There’s something profound in this answer about how Amaechi understood the world’s perception of him.
Theodore Schwartz on Neurosurgery, Consciousness, and Brain-Computer Interfaces. Given that I am a neuroscientist, this one had to make it on the list. On neurosurgeons having excess self-confidence: “The way I talk about it is, you have to, at the same time, have confidence and humility. You have to combine the two, and it’s very difficult to do. Yes, you have to have a certain amount of confidence to tell someone sitting in front of you, ‘I want you to trust me with the most important thing that you possess, which is your brain and your health. You’re going to basically go to sleep and give it to me for four hours and put it under my care, and I’m going to do some risky stuff. You need to have that done, and I have to have the confidence to say I’m the best person to do this or I’m one of the best people to do this for you.’ You have to feel that and you have to earn that.” There’s something ironic about discussing ego in neurosurgery: the brain both demands we have confidence in manipulating it and generates the ego that enables that confidence.
I’m glad to have discovered in the 2025 retrospective that Tyler enjoyed the Donald S. Lopez episode. I also enjoyed many of the CWT top ten mentioned in the retrospective, like the Steven Pinker, Dan Wang, and Ezra Klein episodes. But I come to CWT to find out about people and ideas I wouldn’t normally encounter.
Dwarkesh’s thoughts on AI progress. Dwarkesh is bearish on AI in the short term and bullish on AI in the long term. I admire the public evolution of his thinking on AI acceleration. In this post, he articulates a key paradox: if we’re close to AGI, why are labs investing billions in reinforcement learning environments to train models on specific tasks like Excel or web browsing? Humans don’t need specialized training for every piece of software they’ll use. We are generalists and learn on the job. We don’t have to bake into humans every tiny detail of mundane tasks. There is a higher level of abstraction that humans learn, which is currently not accounted for explicitly with RLVR. This capability gap, in the form of continual learning and generalization from experience, is why he expects true AGI is still a ways away, even as models become increasingly impressive on benchmarks.
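To make the contrast concrete, here's a toy sketch of what a task-specific RL environment looks like, loosely following the reset/step convention popularized by Gym-style libraries. Everything in it (the task, the reward, the names) is hypothetical; real lab environments wrap actual software like spreadsheets or browsers:

```python
class ToySpreadsheetEnv:
    """Reward an agent for entering a target value into a single 'cell'."""

    def __init__(self, target: float = 42.0):
        self.target = target
        self.cell = None

    def reset(self):
        self.cell = None
        return {"cell": self.cell}  # initial observation

    def step(self, action: float):
        self.cell = action
        # Verifiable reward (the "VR" in RLVR): a programmatic check,
        # no human judge in the loop
        reward = 1.0 if self.cell == self.target else 0.0
        done = True  # single-step task
        return {"cell": self.cell}, reward, done


env = ToySpreadsheetEnv()
obs = env.reset()
obs, reward, done = env.step(42.0)
print(reward)  # 1.0
```

The scaling burden is visible even in the toy: every new skill needs its own hand-built environment and reward checker, which is exactly the cost Dwarkesh is questioning.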
Brian Potter wrote about how Bell Labs won its first Nobel Prize. As always, an excellent, in-depth story about a discovery/invention, this time about the discovery of electron diffraction. The highlight of this whole story is the messiness of science and the importance of surprise for discovery. One point I’d add: the importance of expert intuition in recognizing interesting surprises, rather than dismissing them as technical errors, and of following the curiosity to resolve them.
The National Science Foundation put out a Request for Information on its new initiative called Tech Labs. The program aims to launch and scale independent research organizations by funding teams of scientists who will operate autonomously, that is, unconstrained by university administrative overhead. I’m glad this initiative is designed to diversify where funding goes. The government should experiment more with how, where, and to whom it allocates funding. However, this isn’t the first team-based grant mechanism (NIH’s U grants fund large teams, albeit structured differently). The initiative also claims to have a short and lightweight grant cycle, but three application rounds within five years seem anything but lightweight. While the R01 grant cycle has become long, once funded, you don’t reapply for 4-5 years. We’ll see how it unfolds. Here’s a link to a recording of their informational webinar.
Astera Institute announced the launch of a new neuroscience program. Doris Tsao, whose work I’ve followed and admired for years, will head the new program. I’m very excited about this! Astera is doing some amazing things.
Google DeepMind is building its first automated science laboratory, focused on materials science research. I hope this lives up to its potential. Building a new lab from the ground up is a great way to reimagine what a laboratory will look like in the future.
Sebastian Seung, whose work (along with that of many others) enabled the fruit fly brain connectome, has a new startup called Memazing. Memazing aims to reverse engineer the fly brain and emulate it in software. They have a great starting point: a rich, comprehensive dataset of the whole fly brain in high resolution.
Ruxandra Teslo on Clinical Trial Abundance. These pieces shed light on the importance of reverse translation in biomedicine. Clinical trials are often viewed as binary yes/no decisions for drug approval. But in reality, clinical trials provide valuable human data that are key for scientific discovery. We need more feedback loops between the clinic and basic research.
Karthik Tadepalli’s Ideas Aren’t Getting Harder to Find. I found this an insightful synthesis of recent literature on the “ideas are getting harder to find” narrative. Ideas are in fact not getting harder to find, as evidenced by patent trends. Instead, we’ve become less effective at translating new ideas into the market for broader adoption. The market isn’t efficiently enabling more productive firms to outcompete less productive ones. He writes, “They find that almost all of the decline in allocative efficiency is explained by lagging firms failing to catch up. This tells us that the problem of declining allocative efficiency has a rather specific form: Less productive firms stay on as market leaders, while more productive firms are unable to catch up.”
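For readers unfamiliar with the jargon, one standard way to formalize allocative efficiency (my gloss; not necessarily the decomposition the underlying paper uses) is the Olley-Pakes decomposition, which splits share-weighted aggregate productivity into the average firm's productivity plus a covariance term capturing whether market share accrues to productive firms:

$$\Phi_t = \bar{\varphi}_t + \sum_i \left(s_{it} - \tfrac{1}{N}\right)\left(\varphi_{it} - \bar{\varphi}_t\right)$$

Here $\varphi_{it}$ is firm $i$'s productivity, $s_{it}$ its market share, and $\bar{\varphi}_t$ the unweighted mean across $N$ firms. Declining allocative efficiency is a shrinking covariance term: productive firms exist but fail to gain share.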
Reinvent Science wrote about automating away all the administrators. Let’s work on automating science overhead instead of building yet another AI Scientist. From the piece: “AI scientists and self-driving labs are certainly hot areas in 2025, but they’re missing a critical point. The core mechanical activities of science like reading the literature, doing lab and field work, and writing up results have all repeatedly been improved by the arrival of better tools. Using AI as one more tool for this is so incremental it hurts.”
I’ve been enjoying the Saloni Dattani and Jacob Trefethen podcast, Hard Drugs. Specifically the “Will AI solve medicine?” episode. I generally don’t listen to podcasts much longer than an hour, but this one offered a refreshing, grounded discussion amid all the AI x Science headline hype.
I recently came across Cassidy Williams’s blog, where she’s writing a post every day in December. I found her simple list for generating blog post topics useful.
Arjun Raj’s essay on quitting projects in science. I fully agree with this. Scientists should quit projects more often. Given that projects can take 2-4 years to complete, you don’t get to work on many over the course of your career. Make what you work on worth your while. I made a significant pivot in my research direction earlier this year. There were many reasons why I left my old project behind (after working on it for three years), but one was a lack of excitement about the long-term research direction it was taking me toward.
I saw this post on Marginal Revolution the day after this fMRI paper went viral. The virality stemmed from fears that everything we’ve known about fMRI has been wrong. Those fears are overblown. A few points that were on my mind:
The viral tweet misstates the paper’s claims. It suggests the fMRI signal was misaligned with underlying neural activity in 40% of people. What the paper actually reports: in 40% of brain voxels (the 3D equivalent of pixels) that showed a significant BOLD change, oxygen metabolism changed in the opposite direction. That corresponds to ~22% of all gray matter voxels analyzed (see the quick arithmetic after this list).
The paper compares indirect measures of neural activity. It measures cerebral blood flow, cerebral metabolic rate of oxygen, and oxygen extraction fraction, in addition to the BOLD signal. All of these only indirectly measure neural activity. The paper compares two indirect measures to each other without directly measuring neural activity itself. Previous papers have measured EEG alongside fMRI, providing a more direct comparison. Beyond neural activity (the firing of neurons), other processes can contribute to hemodynamic changes, including astrocyte activity and neuromodulators. What the paper does demonstrate is that hemodynamic responses are more heterogeneous than canonical models assume, particularly for negative BOLD signals. It’s a valuable contribution to understanding physiology, even if I think the claims about “neuronal activity” are overstated.
This paper fits within a rich history of neurovascular coupling research. There’s a long history of research on neurovascular coupling and what hemodynamic responses in fMRI indicate. This is one important paper, but it didn’t emerge in a vacuum. Thousands of scientists have tackled neurovascular coupling and fMRI interpretation. My own PhD work was part of this field. Knowing when and how to situate new papers within prior literature is crucial.
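A quick back-of-the-envelope on the two numbers in the first point, assuming both percentages refer to the same pool of analyzed gray matter voxels:

$$0.40 \times f_{\text{sig}} \approx 0.22 \implies f_{\text{sig}} \approx 0.55$$

That is, roughly 55% of analyzed gray matter voxels showed a significant BOLD change at all, and the metabolic mismatch appeared in 40% of that subset.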
Continuing my literature review of high-resolution cellular structure with cryo-electron tomography (cryo-ET), I’ve been doing a deep dive into the work of Julia Mahamid and Bronwyn Lucas. Mahamid and Lucas have both advanced in situ cryo-ET to visualize macromolecules within intact cells rather than as isolated protein structures. Both use correlative approaches to bridge fluorescence microscopy with nanometer-resolution structural detail to reveal organizational principles across cellular compartments. I found this review to be a helpful broad overview of the current state and potential of cryo-ET.
Mahamid has a recent bioRxiv preprint capturing ribosomal complexes through the entire translation process, from initiation to recycling, within cells. This study is a major technical achievement in visualizing the protein synthesis machinery operating inside bacterial cells at near-atomic resolution. While scientists have studied translation extensively in test tubes, this work captures the full complexity of how ribosomes actually function within the crowded, dynamic cellular environment. The Mahamid group previously demonstrated atomic resolution imaging of ribosomes within bacterial cells.
To image cells under the electron microscope, they need to be thinned to ~200 nanometers. Cells, however, are much larger, in the micrometer range, and are thinned using a technique called focused ion beam (FIB) milling. Cell samples are placed inside a vacuum chamber, and an ion beam mills away the tops and bottoms of cellular regions of interest. This milling process can introduce artifacts in cellular structures that, if unaddressed, can impact downstream structure determination. This paper by Lucas provides an in-depth analysis of how the ion beam damages cellular structure during FIB milling. This isn’t particularly glamorous work, but it highlights the complexities and care required when resolving structures at the atomic scale within intricate cellular environments.
I recently read An Immense World by Ed Yong and loved it. It was eye-opening to learn the specifics of how different animals sense the world around them. Scallops have mini DMDs for eyes! I am currently reading two books: Brian Potter’s The Origins of Efficiency and Vaclav Smil’s Energy and Civilization: A History.
A few other misc December readings: English has become easier to read, Vibecession: Much More Than You Wanted To Know, The Boring Part of Bell Labs.
Finally, some year-end things:
It’s been exciting to start blogging about science and metascience this year. I’ve been publishing scientifically for over a decade, but 2025 was the first year I wrote more broadly about scientific progress. From writing for Asimov Press to becoming a writing fellow at the Roots of Progress Institute and attending the Progress Conference, 2025 was the year I became an active participant in the Progress community. This is the most intellectually stimulating and ambitious crowd I’ve been part of, and I spend most of my time around PhDs. I’m looking forward to contributing to these conversations in 2026.
This year also saw a pivot in my research from systems neuroscience to structural cellular biology. This came with a family move from Seattle to Madison. I’ve thoroughly enjoyed diving into the world of cryo-electron microscopy and tomography. It’s been fascinating transitioning from a lab where most instruments were custom builds to a field with standardized instrumentation. Standardized instruments don’t mean standardized workflows, though. Generating the perfect vitrified sample for imaging under the electron microscope is very much an art form. Every sample type, every cellular system requires its own optimization. My conviction in the promise of in situ cryo-electron tomography is strong. Understanding molecular machinery in its native context to bridge cellular and structural biology feels like the future, and I’m excited to be part of it.
Happy New Year from Bangalore! I hope you stick around for a lot more of my writing in 2026.

Cover image: Magic Mirror by M. C. Escher. Source.



