Some more O3-Pro fun, this time exploring some fascinating research on parallels between human cognition and large language models, highlighting how a “universal geometry of meaning” and latent space reveal similarities between the two.
This was a ‘co-authored’ experiment with O3-Pro: I began with my own initial research and selective vetting, then guided the model step-by-step through iterations to produce a paper. Google takes the credit for the podcast, and I added a few touches to the final sound. My question was one I have been discussing, in one form or another, with many people in recent months: how can we sustainably be, learn, and grow in this new AI-shaped world?
With ever-increasing AI slop online, it feels weird to post anything AI-generated. OpenAI’s O3-Pro, however, feels refreshingly different: early users reported difficulty finding challenges complex enough to truly test its limits. So when I got access, I decided to see for myself: could O3-Pro clearly explain Karl Friston’s Free Energy Principle (Markov blankets, surprise spikes, belief updating, open systems) to a general audience? It produced a detailed explanation at an accessible reading level, which I fed into NotebookLM, layering the resulting podcast over a soundscape backdrop. A very impressive result, at least for this use case of O3-Pro!
More O3-Pro fun (with Google handling part of it, including podcast generation), this time looking at conversational AI and its mental health implications for certain vulnerable populations, among other things.