Santiago Aranguri

CV
I am a Research Fellow at Goodfire, working on model diffing to surface unexpected behavior in LLMs. Before that, I completed the MATS training phase with Neel Nanda, where I worked on new techniques for model diffing (SAEs on activation differences). I also worked on a variation of crosscoders for understanding how chat behavior arises from the base model.

I am a fourth-year PhD student at New York University. My earlier PhD research was on scaling laws and phase transitions of diffusion models and neural networks, with Eric Vanden-Eijnden and Arthur Jacot. I obtained my B.S. in Mathematics at Stanford University, where I worked on interacting particle systems with Amir Dembo.

Publications

Optimizing Noise Schedules of Generative Models in High Dimensions
S. Aranguri, G. Biroli, M. Mezard, E. Vanden-Eijnden
Under review

Phase-aware Training Schedule Simplifies Learning in Flow-Based Generative Models
S. Aranguri, F. Insulla
ICLR 2025 Deep Generative Model in Machine Learning Workshop and Frontiers in Probabilistic Inference Workshop

Mixed Dynamics In Linear Networks: Unifying the Lazy and Active Regimes
Z. Tu, S. Aranguri, A. Jacot
NeurIPS 2024

Untangling planar graphs and curves by staying positive
S. Aranguri, H. Chang, D. Fridman
ACM-SIAM Symposium on Discrete Algorithms 2022


Contact

You can contact me at aranguri@nyu.edu