Broadly speaking, I am interested in the intersection of computer science and neuroscience.

Research interests: Theoretical Neuroscience, Brain-Machine Interfaces, Hyperdimensional Computing, Reinforcement Learning, Neuromorphic Computing, ML in Education, Explainable AI

My interest in a more academic career path began after taking neuroscience and discrete math classes in high school. The neuroscience class introduced me to the idea of Brain-Machine Interfaces and to a brilliant book by Miguel Nicolelis called Beyond Boundaries. That summer a friend and I built my first BMI (without any prior EE knowledge), the simple EEG-based device shown above.

As an undergraduate, I did my research at the Redwood Center for Theoretical Neuroscience, where I mainly worked on two projects. The first involved assisting with computational modeling for the Yartsev lab: experimenting with different supervised learning methods to distinguish bat vocalizations from noise, and exploring unsupervised methods to understand structure in the vocalizations. The second project centered on hierarchical reinforcement learning and transfer learning for graph-based representations of MuJoCo environments.

I performed my graduate research under the supervision of Prof. Olshausen, the director of the Redwood Center for Theoretical Neuroscience. During my time as a graduate student I explored deep hierarchical reinforcement learning in the state-space domain, with an interest in any relationships to hippocampal replay events and Michael Arbib's World Graph theory. I also investigated hyperdimensional computing in the context of natural language processing, with a particular focus on its capabilities in parallel architectures and its performance relative to similar models.

Currently I do research at Lawrence Livermore National Lab that focuses on applying machine learning to a variety of fields. The exact details are generally not public information, but past work has included exploring generative models for manifold learning and deep Bayesian methods. In the future I'm looking to be more involved with the lab's projects on neuromorphic computing and brain-machine interfaces. In my free time I'm pursuing the research interests noted above. Most recently that has involved working my way through a list of research papers I was recommended as a grad student but didn't have time to read, some of which will be featured in my blog.