Joe Davison
Hi, I'm Joe. I'm an AI/ML engineer and researcher focused on language models, reasoning systems, and reliable agents. I'm especially interested in integrated memory, ultra-long context, inference scaling, RLVR, synthetic data generation, and self-play.

I lead AI work at BambooHR, where I'm helping ship the next generation of Ask BambooHR. Previously I was on the Hugging Face science team; you may know my work if you've ever used the zero-shot classification pipeline or the research guide on calculating fixed-length model perplexity. I also spent time as a Senior ML Scientist at Enveda Biosciences, applying Transformers to mass-spec structure prediction for drug discovery.
I earned an MS in Data Science from Harvard, working with faculty including Sasha Rush and Finale Doshi-Velez, and spent a year in doctoral studies at the University of Utah with Vivek Srikumar.
The work that excites me most sits at the intersection of research and systems: models and agents that can search, remember, reason over long contexts, and improve through generated experience.
Scaling wins in the end.