I’m broadly interested in theoretical machine learning and decision making. Recently, I have been thinking about function approximation for decision-making settings such as contextual bandits and reinforcement learning, with the goal of characterizing the fundamental limits of these problems and understanding new algorithmic paradigms.
If you want to get in touch, you can reach me at: [gene at ttic dot edu].
You can find my (perpetually outdated) CV here.
- Starting a Google Student Researcher role in July 2023.
- Attended RL Summer School in Barcelona, Spain in June 2023.
- Three papers accepted to NeurIPS 2022. If you are interested, let's chat!
- Spent Summer 2022 at Princeton University, working with Prof. Jason Lee.
- Co-organized (with Kumar Kshitij Patel) the TTIC Student Workshop in August 2021.
- Spent Fall 2020 as a (virtual) visiting graduate student at the Theory of Reinforcement Learning program at the Simons Institute.
- Pessimism for Offline Linear Contextual Bandits using \(\ell_p\) Confidence Sets
Gene Li, Cong Ma, Nathan Srebro.
NeurIPS 2022. [Poster]
- Understanding the Eluder Dimension
Gene Li, Pritish Kamath, Dylan J. Foster, Nathan Srebro.
NeurIPS 2022.
- Exponential Family Model-Based Reinforcement Learning via Score Matching
Gene Li, Junbo Li, Anmol Kabra, Nathan Srebro, Zhaoran Wang, Zhuoran Yang.
NeurIPS 2022. (Oral Presentation). [Poster]
- Statistical and Computational Learning Theory - Winter 2023
Instructor: Prof. Nathan Srebro, TTIC.
Some notes on learning theory. These notes are mostly for my own reference, but others may find them useful. Any mistakes are my own.
Website last updated July 2023.