Sebastian Bordt

Postdoctoral Researcher in Machine Learning

Hi there! I'm a postdoctoral researcher interested in large language models and interpretability. I work in the theory of machine learning group at the University of Tübingen with Ulrike von Luxburg.

Most of my current work is empirical language model research. For example, we proposed a novel approach to making experimentation during pre-training more efficient (ICLR'26), studied how data contamination affects LLM evaluations (ICML'25), and investigated why large learning rates are effective for training transformers (NeurIPS'25).

During my PhD, I worked on a variety of topics in explainable machine learning, including the connections between post-hoc methods and interpretable models, and the suitability of explanation algorithms for regulation. I wrote up my perspective on explainable machine learning in this ICML'25 position paper.

Recent News

Selected Publications

For a full list of publications, please see Google Scholar.

On the Surprising Effectiveness of Large Learning Rates under Standard Width Scaling

Moritz Haas, Sebastian Bordt, Ulrike von Luxburg, Leena Chennuru Vankadara

NeurIPS 2025

Spotlight

How Much Can We Forget about Data Contamination?

Sebastian Bordt, Suraj Srinivas, Valentyn Boreiko, Ulrike von Luxburg

ICML 2025

Position: Rethinking Explainable Machine Learning as Applied Statistics

Sebastian Bordt, Eric Raidl, Ulrike von Luxburg

ICML 2025

Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models

Sebastian Bordt, Harsha Nori, Vanessa Rodrigues, Besmira Nushi, Rich Caruana

COLM 2024

Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness

Suraj Srinivas*, Sebastian Bordt*, Hima Lakkaraju

NeurIPS 2023

Spotlight

From Shapley Values to Generalized Additive Models and back

Sebastian Bordt, Ulrike von Luxburg

AISTATS 2023