Sebastian Bordt
Hi! I'm a postdoctoral researcher interested in large language models and interpretability. I work in the
theory of machine learning group at the University of Tübingen, supervised by Ulrike von Luxburg.
In 2023, I interned with Rich Caruana at Microsoft Research.
Currently, I'm very interested in a principled understanding of LLMs, including their evaluation.
If you are interested in this topic, take a look at this recent preprint and our COLM'24 paper.
At Microsoft Research, we did some experiments with GPT-4 and GAMs in a
healthcare setting. In Tübingen, we had ChatGPT blindly participate
in a computer science exam.
During my PhD, I worked on a variety of topics in explainable machine learning. For example, I have worked on the connections between post-hoc methods and interpretable models, and on the suitability of explanation algorithms for regulation. If you are interested in these topics, take a look at this blog post. In a different line of work, we ask when saliency maps for image classifiers are perceptually aligned, and how this is connected with robust models.
Prior to my work in machine learning, I obtained Master's degrees in Mathematics and
Economics at TUM and LMU in
Munich. I also spent some time at the Munich Graduate School of Economics.
- October 2023: We will present our paper "Elephants Never Forget: Testing Language Models for Memorization of Tabular Data" at the Table Representation Learning Workshop at NeurIPS.
- September 2023: Our paper "Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness" was highlighted as a Spotlight at NeurIPS.
- July 2023: Visit the AdvML Frontiers workshop at ICML to see our paper "Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness".
- June 2023: At CVPR in Vancouver to present our paper "The Manifold Hypothesis for Gradient-Based Explanations", which received a spotlight at the Workshop on Explainable AI for Computer Vision.
- Summer 2023: I'm interning with Rich Caruana at Microsoft Research in Seattle.
- April 2023: At AISTATS to present our paper "From Shapley Values to Generalized Additive Models and back".
- April 2023: Visiting the group of Hima Lakkaraju at Harvard.
- April 2023: Invited talk at the Workshop on Machine Learning, Interpretability, and Logic at the IDEAL institute in Chicago.
- March 2023: Talking at the Workshop on Explainability in Machine Learning at the University of Tübingen.
- November 2022: Speaking at the Nice Workshop on Interpretability.
- Summer 2022: Visiting the Simons Institute for the Theory of Computing to participate in the Summer Cluster on Interpretable Machine Learning.
- June 2022: At FAccT in Seoul to present our paper "Post-hoc explanations fail to achieve their purpose in adversarial contexts".
- March 2022: Presenting our paper "A Bandit Model for Human-Machine Decision Making with Private Information and Opacity" at AISTATS.
- Statistics without Interpretation: A Sober Look at Explainable Machine Learning
Sebastian Bordt and Ulrike von Luxburg
arXiv preprint, 2024
Paper
- LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs
Benjamin J. Lengerich, Sebastian Bordt, Harsha Nori, Mark E. Nunnally, Yin Aphinyanaphongs, Manolis Kellis, Rich Caruana
arXiv preprint, 2023
Paper Code
- ChatGPT Participates in a Computer Science Exam
Sebastian Bordt, Ulrike von Luxburg
arXiv preprint, 2023
Paper Code
Peer-Reviewed Publications
- How much can we forget about Data Contamination?
Sebastian Bordt, Suraj Srinivas, Valentyn Boreiko, Ulrike von Luxburg
ATTRIB Workshop at NeurIPS, 2024
Paper
- Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models
Sebastian Bordt, Harsha Nori, Vanessa Rodrigues, Besmira Nushi, Rich Caruana
COLM, 2024
Paper Code
- Data Science with LLMs and Interpretable Models
Sebastian Bordt, Ben Lengerich, Harsha Nori, Rich Caruana
XAI4Sci Workshop at AAAI-24, 2024
Paper Code (TalkToEBM)
- Elephants Never Forget: Testing Language Models for Memorization of Tabular Data
Sebastian Bordt, Harsha Nori, Rich Caruana
Table Representation Learning Workshop at NeurIPS, 2023
Paper
- Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness
Suraj Srinivas*, Sebastian Bordt*, Hima Lakkaraju
NeurIPS, 2023
Spotlight
Paper Code
- The Manifold Hypothesis for Gradient-Based Explanations
Sebastian Bordt, Uddeshya Upadhyay, Zeynep Akata, Ulrike von Luxburg
Explainable AI for Computer Vision (XAI4CV) Workshop at CVPR, 2023
Spotlight
Paper
- From Shapley Values to Generalized Additive Models and back
Sebastian Bordt, Ulrike von Luxburg
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
Paper Code
- Post-hoc explanations fail to achieve their purpose in adversarial contexts
Sebastian Bordt, Michèle Finck, Eric Raidl, Ulrike von Luxburg
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022
Paper Code Blog Post Video
- A Bandit Model for Human-Machine Decision Making with Private Information and Opacity
Sebastian Bordt, Ulrike von Luxburg
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
Paper
- Recovery Guarantees for Kernel-based Clustering under Non-parametric Mixture Models
Leena C. Vankadara, Sebastian Bordt, Ulrike von Luxburg, Debarghya Ghoshdastidar
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
Oral Presentation
Paper