7+ years of experience in Machine Learning, Large Language Models, and Generative AI
I'm a Senior R&D Computer Scientist at Sandia National Laboratories with an active DOE Q Clearance. I received my MS in Computer Science from Georgia Tech, specializing in Machine Learning, and my BS in Computer Science from New Mexico Tech, graduating Summa Cum Laude.
My research focuses on large language models (LLMs), natural language understanding (NLU), generative AI, retrieval-augmented generation (RAG), natural language processing, and HPC-accelerated machine learning.
With over 7 years of experience, I've developed state-of-the-art capabilities in LLM fine-tuning, few-shot and zero-shot learning, RAG systems, and generative AI applications, leveraging high-performance computing environments and cutting-edge NLP techniques.
Interactive Skills Network
Academic papers, conference presentations, and technical reports
OSTI Technical Report - SAND-2024-14400R
SoftwareX Journal Publication
IEEE Access - Co-authored with K.W. Kliesner & S. Zenker
Georgia Tech - CS7643 Project
Georgia Tech - CS7641 Project
Georgia Tech - CSE8803DLT Project
Activate Conference Presentation
New Mexico Tech - CSE 441 Cryptography Project
New Mexico Tech - PSY 389 Computational Neuroscience Project
OSTI Technical Report
OSTI Technical Report
Showcasing technical innovation and creative exploration
Successfully launched NFT collection featuring generative art created with custom algorithms. Combines artistic creativity with blockchain technology.
Interactive visualization tool for the Extended MNIST dataset, enabling exploration of handwritten character recognition patterns.
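To give a sense of the data this tool explores, here is a small sketch of loading and displaying EMNIST characters with torchvision and matplotlib; the tool's actual interactive interface is not shown, and the split and layout below are just illustrative choices.

```python
import matplotlib.pyplot as plt
import numpy as np
from torchvision import datasets

# A small sketch of browsing EMNIST characters; the split, grid size, and
# display choices here are illustrative assumptions, not the tool itself.
emnist = datasets.EMNIST(root="data", split="balanced", train=True, download=True)

fig, axes = plt.subplots(2, 5, figsize=(8, 4))
for ax, index in zip(axes.flat, range(10)):
    image, label = emnist[index]
    # EMNIST images are stored transposed relative to MNIST, so flip for display.
    ax.imshow(np.asarray(image).T, cmap="gray")
    ax.set_title(f"class {label}")
    ax.axis("off")
plt.tight_layout()
plt.show()
```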
Production-quality trading backtesting framework with realistic market simulation, including slippage models, execution delays, and comprehensive performance analysis.
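As a flavor of the market-realism features mentioned above, here is a minimal sketch of a proportional slippage model and a one-bar execution delay; the class names, parameters, and fill logic are illustrative assumptions, not the framework's actual API.

```python
from dataclasses import dataclass

# Hypothetical illustration of slippage and execution delay in a backtest
# fill step; names and parameters are assumptions, not the framework's API.

@dataclass
class Order:
    symbol: str
    quantity: int       # positive = buy, negative = sell
    submitted_bar: int  # bar index when the order was placed

class ProportionalSlippage:
    """Moves the fill price against the trader by a fixed fraction."""
    def __init__(self, slippage_bps: float = 5.0):
        self.slippage_bps = slippage_bps

    def fill_price(self, order: Order, market_price: float) -> float:
        adjustment = market_price * self.slippage_bps / 10_000
        # Buys fill above the quoted price, sells fill below it.
        return market_price + adjustment if order.quantity > 0 else market_price - adjustment

class DelayedExecution:
    """Fills orders one bar after submission to mimic execution latency."""
    def __init__(self, delay_bars: int = 1):
        self.delay_bars = delay_bars

    def is_fillable(self, order: Order, current_bar: int) -> bool:
        return current_bar - order.submitted_bar >= self.delay_bars

# Usage: fill a buy order against a simulated price one bar later.
slippage = ProportionalSlippage(slippage_bps=5.0)
delay = DelayedExecution(delay_bars=1)
order = Order(symbol="AAPL", quantity=100, submitted_bar=10)
if delay.is_fillable(order, current_bar=11):
    print(slippage.fill_price(order, market_price=190.00))  # 190.095
```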
Full-featured compiler for the C- language, built as part of the CSE 423 Compilers course. Implements lexical analysis, parsing, AST generation, semantic analysis, and code optimization.
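For a taste of the first stage of that pipeline, here is a toy sketch of lexical analysis for a C-minus-like subset; the token set and keyword list are assumptions for demonstration and do not mirror the course compiler's implementation.

```python
import re

# Toy illustration of the lexical-analysis stage for a C-minus-like language;
# the token categories and keywords below are assumptions for demonstration.

TOKEN_SPEC = [
    ("NUM",   r"\d+"),
    ("ID",    r"[A-Za-z_]\w*"),
    ("OP",    r"[+\-*/=<>!]=?"),
    ("PUNCT", r"[(){};,\[\]]"),
    ("SKIP",  r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

KEYWORDS = {"int", "void", "if", "else", "while", "return"}

def tokenize(source: str):
    """Yield (kind, lexeme) pairs, reclassifying keywords out of identifiers."""
    for match in MASTER.finditer(source):
        kind, lexeme = match.lastgroup, match.group()
        if kind == "SKIP":
            continue
        if kind == "ID" and lexeme in KEYWORDS:
            kind = "KEYWORD"
        yield kind, lexeme

print(list(tokenize("int x = 42;")))
# [('KEYWORD', 'int'), ('ID', 'x'), ('OP', '='), ('NUM', '42'), ('PUNCT', ';')]
```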
PyTorch memory management framework that automatically handles models larger than GPU memory. Features automatic strategy selection, multi-GPU support, and CPU offloading.
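As an illustration of one of those ideas, here is a minimal sketch of layer-wise CPU offloading in PyTorch, where weights live on the CPU and each layer visits the GPU only for its own forward pass; the hook-based approach and function names are assumptions, not the framework's actual implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of layer-wise CPU offloading: parameters stay on the CPU and
# each layer is moved to the GPU just in time for its forward pass. The
# hook-based approach here is an illustrative assumption, not the project's code.

def enable_cpu_offload(model: nn.Sequential, device: str = "cuda") -> nn.Sequential:
    def to_gpu(module, inputs):
        module.to(device)   # bring this layer's weights onto the GPU just in time

    def back_to_cpu(module, inputs, output):
        module.to("cpu")    # release GPU memory immediately after the layer runs
        return output

    for layer in model:
        layer.register_forward_pre_hook(to_gpu)
        layer.register_forward_hook(back_to_cpu)
    return model

if __name__ == "__main__" and torch.cuda.is_available():
    model = nn.Sequential(*[nn.Linear(4096, 4096) for _ in range(8)]).cpu()
    model = enable_cpu_offload(model)
    x = torch.randn(2, 4096, device="cuda")
    with torch.no_grad():
        y = model(x)        # each Linear occupies the GPU only briefly
    print(y.shape)          # torch.Size([2, 4096])
```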
High-performance computing infrastructure for ML research, development, and personal projects.
Explore machine learning concepts through interactive visualizations
Interactive machine learning demonstrations are currently in development.
Check back soon for hands-on visualizations of ML concepts!