Sebastian Bordt
Postdoctoral Researcher in Machine Learning
Hi there! I'm a postdoctoral researcher interested in large language models and interpretability. I work in the theory of machine learning group at the University of Tübingen with Ulrike von Luxburg.
Most of my current work is empirical language model research. For example, we proposed a novel approach to making experimentation during pre-training more efficient (ICLR'26), studied how data contamination impacts LLM evaluations (ICML'25), and investigated why large learning rates are effective for training transformers (NeurIPS'25).
During my PhD, I worked on a variety of topics in explainable machine learning. For example, I worked on the connections between post-hoc methods and interpretable models, and on the suitability of explanation algorithms for regulation. I wrote up my perspective on explainable machine learning in this ICML'25 position paper.
Recent News
Selected Publications
For a full list of publications, please see Google Scholar.