
I'm George Ingebretsen, an EECS undergraduate at UC Berkeley doing what I can to make AI go well.
In the past, I've worked on multi-turn jailbreaks, singular learning theory, and interpretability, and I've organized a few AI conferences.
Here's my resume for more info on what I'm up to.
Currently:
- Interning at Berkeley's Center for Human-Compatible AI
- Co-president of Berkeley's AI Safety club
- Finishing up an interpretability (SLT) project with Lucius Bushnaq at Apollo Research
- Completing my degree early, ready for full-time work starting September 2025
I use this form for anonymous feedback and messages about how I can do better. I really appreciate people taking the time to fill it out, even if we're only acquaintances.
I'm always up to chat. Feel free to email me or reach out on X/Twitter.
I also sometimes post here.
Research Publications
Also see my Google Scholar.
- Emerging Vulnerabilities in Frontier Models: Multi-Turn Jailbreak Attacks
  Tom Gibbs*, Ethan Kosak-Hine*, George Ingebretsen*, Jason Zhang, Julius Broomfield, Sara Pieri, Reihaneh Iranmanesh, Reihaneh Rabbany, Kellin Pelrine.
  📍 NeurIPS SafeGenAI, NeurIPS Red Teaming GenAI
- Approximating the Local Learning Coefficient in Neural Networks: A Comparative Analysis of Power Series Expansion Orders
  Advised by Lucius Bushnaq (Apollo).
Posts
Probably Not A Ghost Story
Making Little Simz's "Gorilla" Interactive Music Video
Computer Apps I Recommend
subscribe via RSS