Sebastian Bordt

I recently completed my PhD in Computer Science, supervised by Ulrike von Luxburg at the University of Tübingen, where I now work as a postdoctoral researcher. In 2023, I interned with Rich Caruana at Microsoft Research.

During my thesis, I worked on interpretable machine learning. For example, I worked on the connections between post-hoc methods and interpretable models, and on the suitability of explanation algorithms for regulation. If you are interested in these topics, take a look at this blog post. In a different line of work, we ask when saliency maps for image classifiers are perceptually aligned, and how this is connected to robust models.

More recently, I have become interested in Large Language Models, especially in the topic of evaluation. During my internship at Microsoft Research, we studied memorization and learning in LLMs. We also ran experiments with GPT-4 and GAMs in a healthcare setting. In Tübingen, we had ChatGPT blindly participate in a computer science exam.

Prior to my work in machine learning, I obtained Master's degrees in Mathematics and Economics at TUM and LMU in Munich. I also spent some time at the Munich Graduate School of Economics.

Mail      Scholar      Twitter      GitHub      cv

Recent News
Past Activities
  • October 2023: We will present our paper "Elephants Never Forget: Testing Language Models for Memorization of Tabular Data" at the Table Representation Learning Workshop at NeurIPS.
  • September 2023: Our paper "Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness" was highlighted as a Spotlight at NeurIPS.
  • July 2023: Visit the AdvML Frontiers workshop at ICML to see our paper "Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness".
  • June 2023: At CVPR in Vancouver to present our paper "The Manifold Hypothesis for Gradient-Based Explanations" which received a spotlight at the Workshop on Explainable AI for Computer Vision.
  • Summer 2023: I'm interning with Rich Caruana at Microsoft Research in Seattle.
  • April 2023: At AISTATS to present our paper "From Shapley Values to Generalized Additive Models and back".
  • April 2023: Visiting the group of Hima Lakkaraju at Harvard.
  • April 2023: Invited Talk at the Workshop on Machine Learning, Interpretability, and Logic at the IDEAL institute in Chicago.
  • March 2023: Talking at the Workshop on Explainability in Machine Learning at the University of Tübingen.
  • November 2022: Speaking at the Nice Workshop on Interpretability.
  • Summer 2022: Visiting the Simons Institute for the Theory of Computing to participate in the Summer Cluster on Interpretable Machine Learning.
  • June 2022: At FAccT in Seoul to present our paper "Post-hoc explanations fail to achieve their purpose in adversarial contexts".
  • March 2022: Presenting our paper "A Bandit Model for Human-Machine Decision Making with Private Information and Opacity" at AISTATS.
Pre-Prints
  • Statistics without Interpretation: A Sober Look at Explainable Machine Learning
    Sebastian Bordt and Ulrike von Luxburg
    arXiv pre-print, 2024
    Paper
  • LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs
    Benjamin J. Lengerich, Sebastian Bordt, Harsha Nori, Mark E. Nunnally, Yin Aphinyanaphongs, Manolis Kellis, Rich Caruana
    arXiv pre-print, 2023
    Paper Code
  • ChatGPT Participates in a Computer Science Exam
    Sebastian Bordt, Ulrike von Luxburg
    arXiv pre-print, 2023
    Paper    Code
Peer-Reviewed Publications
  • Data Science with LLMs and Interpretable Models
    Sebastian Bordt, Ben Lengerich, Harsha Nori, Rich Caruana
    XAI4Sci Workshop at AAAI-24, 2024
    Paper    Code (TalkToEBM)
  • Elephants Never Forget: Testing Language Models for Memorization of Tabular Data
    Sebastian Bordt, Harsha Nori, Rich Caruana
    Table Representation Learning Workshop at NeurIPS, 2023
  • Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
    Suraj Srinivas*, Sebastian Bordt*, Hima Lakkaraju
    NeurIPS, 2023
    Spotlight
    Paper    Code
  • The Manifold Hypothesis for Gradient-Based Explanations
    Sebastian Bordt, Uddeshya Upadhyay, Zeynep Akata, Ulrike von Luxburg
    Explainable AI for Computer Vision (XAI4CV) Workshop at CVPR 2023
    Spotlight
    Paper
  • From Shapley Values to Generalized Additive Models and back
    Sebastian Bordt, Ulrike von Luxburg
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
    Paper    Code
  • Post-hoc explanations fail to achieve their purpose in adversarial contexts
    Sebastian Bordt, Michèle Finck, Eric Raidl, Ulrike von Luxburg
    ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022
    Paper    Code    Blog Post    Video
  • A Bandit Model for Human-Machine Decision Making with Private Information and Opacity
    Sebastian Bordt, Ulrike von Luxburg
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
    Paper
  • Recovery Guarantees for Kernel-based Clustering under Non-parametric Mixture Models
    Leena C. Vankadara, Sebastian Bordt, Ulrike von Luxburg, Debarghya Ghoshdastidar
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
    Oral Presentation
    Paper

Website template.