ryan@cooper:~

About Me

7+ years of experience in Machine Learning, Large Language Models, and Generative AI

Background

I'm a Senior R&D Computer Scientist at Sandia National Laboratories with an active DOE Q Clearance. I received my MS in Computer Science from Georgia Tech, specializing in Machine Learning, and my BS in Computer Science from New Mexico Tech, graduating Summa Cum Laude.

My research focuses on large language models (LLMs), natural language understanding (NLU), generative AI, retrieval-augmented generation (RAG), natural language processing, and HPC-accelerated machine learning.

With over 7 years of work experience, I've developed state-of-the-art capabilities in LLM fine-tuning, natural language understanding, few-shot and zero-shot learning, RAG systems, and generative AI applications, leveraging high-performance computing environments and cutting-edge NLP techniques.

Interactive Skills Network


Research & Publications

Academic papers, conference presentations, and technical reports

2024

Predictive Indicators of the Performance of Large Language Models

OSTI Technical Report - SAND-2024-14400R

2024

pyRoCS: A Python package to evaluate the resilience of complex systems

SoftwareX Journal Publication

2024

Contrastive Meta-Learner for Automatic Text Labeling and Semantic Textual Similarity

IEEE Access - Co-authored with K.W. Kliesner & S. Zenker

2021

CP2R: GPT-2 Conversational Pipeline using Relevance and Realism Discrimination

Georgia Tech - CS7643 Project

2021

Towards a Machine Learning-based Framework for Academic Paper Pre-evaluation

Georgia Tech - CS7641 Project

2020

Conspiracy-BERT: A Pre-Trained Language Model for Conspiracy Theory Tweets

Georgia Tech - CSE8803DLT Project

2019

Configuring Recommendations for Personalized Search at Sandia National Laboratories

Activate Conference Presentation

2019

Cryptanalysis of Digital Watermarking

New Mexico Tech - CSE 441 Cryptography Project

2019

Exploring the Capabilities and Possible Applications of Neural Turing Machines

New Mexico Tech - PSY 389 Computational Neuroscience Project

2017

Neural Network Ranking System for Personalized Enterprise Search

OSTI Technical Report

2015

Geospatial-Temporal Semantic Graph Search Template Generation via Data Mining

OSTI Technical Report

Featured Projects

Showcasing technical innovation and creative exploration

Flames by Fabled Fractals

Successfully launched NFT collection of generative art created with custom algorithms. Combines artistic creativity with blockchain technology.

View Collection →
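For a flavor of the kind of generative algorithm behind a collection like this, here is a minimal fractal-flame-style sketch: a chaos game over random affine maps with a sinusoidal variation, rendered as a point cloud. The transforms and parameters are illustrative assumptions; the collection's actual generators are custom and not shown here.

```python
# Toy fractal-flame-style sketch: a chaos-game IFS with a sinusoidal variation,
# rendered as a point cloud. The maps and parameters are purely illustrative;
# the collection's actual generators are custom and not public.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Two affine maps; each step applies one at random, then the "sinusoidal" variation.
AFFINES = [
    (np.array([[0.5, -0.4], [0.4, 0.5]]), np.array([0.2, 0.0])),
    (np.array([[-0.6, 0.1], [0.1, 0.6]]), np.array([-0.1, 0.3])),
]

point = np.zeros(2)
points = []
for i in range(60_000):
    A, b = AFFINES[rng.integers(len(AFFINES))]
    point = np.sin(A @ point + b)   # affine map followed by sinusoidal variation
    if i > 20:                      # discard the transient before the attractor forms
        points.append(point)

xs, ys = np.array(points).T
plt.figure(figsize=(5, 5), facecolor="black")
plt.scatter(xs, ys, s=0.05, c="white", alpha=0.3)
plt.axis("off")
plt.show()
```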

EMNIST Visualization

Interactive visualization tool for the Extended MNIST dataset, enabling exploration of handwritten character recognition patterns.

View Repo →
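A minimal sketch of the kind of exploration the tool enables, assuming torchvision's EMNIST loader and matplotlib; the "balanced" split and the grid layout are illustrative choices, not necessarily how the repo loads or renders the data.

```python
# Minimal EMNIST exploration sketch (assumes torchvision and matplotlib are installed).
# The 'balanced' split and the 3x3 grid are illustrative; the actual visualization
# tool may present the dataset differently.
import matplotlib.pyplot as plt
from torchvision import datasets, transforms

emnist = datasets.EMNIST(
    root="data", split="balanced", train=True, download=True,
    transform=transforms.ToTensor(),
)

fig, axes = plt.subplots(3, 3, figsize=(6, 6))
for ax, idx in zip(axes.flat, range(9)):
    image, label = emnist[idx]
    # EMNIST images are stored transposed relative to MNIST, so flip axes for display.
    ax.imshow(image.squeeze().T, cmap="gray")
    ax.set_title(f"label {label}")
    ax.axis("off")
plt.tight_layout()
plt.show()
```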

Cryptocurrency Backtesting Framework

Production-quality trading backtesting framework with realistic market simulation, including slippage models, execution delays, and comprehensive performance analysis.

Private Repo
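Because the repo is private, here is only a hedged sketch of one ingredient named above, a slippage-aware fill model; the names, parameters, and the square-root impact formula are assumptions for illustration, not the framework's actual implementation.

```python
# Illustrative sketch of a slippage-aware fill model of the kind described above.
# All names and the square-root impact formula are assumptions; the private
# framework's actual slippage and delay models may differ.
from dataclasses import dataclass

@dataclass
class Fill:
    price: float      # effective execution price after slippage
    quantity: float   # filled quantity (base asset)
    delay_bars: int   # bars of execution delay applied

def simulate_fill(mid_price: float, quantity: float, adv: float,
                  spread: float = 0.0005, impact_coeff: float = 0.1,
                  delay_bars: int = 1) -> Fill:
    """Apply half-spread cost plus square-root market impact to a market order."""
    participation = abs(quantity) / max(adv, 1e-9)   # fraction of average daily volume
    impact = impact_coeff * participation ** 0.5     # square-root impact model
    slip = spread / 2 + impact                       # total fractional slippage
    side = 1 if quantity > 0 else -1                 # buys pay up, sells receive less
    return Fill(price=mid_price * (1 + side * slip),
                quantity=quantity, delay_bars=delay_bars)

# Example: buying 5 BTC against 2,000 BTC of daily volume at a $60,000 mid price.
print(simulate_fill(60_000.0, 5.0, adv=2_000.0))
```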

C- Language Compiler

Full-featured compiler for the C- language, built for the CSE 423 Compilers course. Implements lexical analysis, parsing, AST generation, semantic analysis, and code optimization.

View Repo →
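As a small illustration of the first stage listed above, here is a toy lexer for a C-like subset; the token set, keywords, and the choice of Python are hypothetical and do not reflect the actual course implementation.

```python
# Toy lexer sketch for a C-like subset, illustrating the lexical-analysis stage.
# The token set and regular expressions are hypothetical; the real C- compiler's
# grammar and implementation language may differ.
import re

TOKEN_SPEC = [
    ("NUMBER",   r"\d+"),
    ("ID",       r"[A-Za-z_]\w*"),
    ("OP",       r"[+\-*/=<>!]=?|[{}();,\[\]]"),
    ("SKIP",     r"[ \t\n]+"),
    ("MISMATCH", r"."),
]
KEYWORDS = {"int", "void", "if", "else", "while", "return"}
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(source: str):
    for match in TOKEN_RE.finditer(source):
        kind, text = match.lastgroup, match.group()
        if kind == "SKIP":
            continue
        if kind == "MISMATCH":
            raise SyntaxError(f"unexpected character {text!r}")
        if kind == "ID" and text in KEYWORDS:
            kind = "KEYWORD"
        yield kind, text

print(list(tokenize("int main(void) { return 1 + 2; }")))
```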

Overflow

PyTorch memory management framework that automatically handles models larger than GPU memory. Features automatic strategy selection, multi-GPU support, and CPU offloading.

View Repo →
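A minimal sketch of the offloading idea, assuming PyTorch forward hooks that stage each layer onto the GPU only for its own forward pass; Overflow's automatic strategy selection and multi-GPU support are not represented here.

```python
# Minimal sketch of per-layer CPU offloading with PyTorch forward hooks.
# Hook-based offloading at child-module granularity is an assumption made for
# illustration; the repo's strategy selection and multi-GPU paths are not shown.
import torch
import torch.nn as nn

def attach_offload_hooks(module: nn.Module, device: str = "cuda"):
    """Keep each child layer on CPU, moving it to the GPU only for its forward pass."""
    for layer in module.children():
        layer.to("cpu")

        def pre_hook(mod, inputs, device=device):
            mod.to(device)                            # load weights onto the GPU just in time
            return tuple(x.to(device) for x in inputs)

        def post_hook(mod, inputs, output):
            mod.to("cpu")                             # release GPU memory after the layer runs
            return output

        layer.register_forward_pre_hook(pre_hook)
        layer.register_forward_hook(post_hook)

model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096))
if torch.cuda.is_available():
    attach_offload_hooks(model)
    with torch.no_grad():
        out = model(torch.randn(8, 4096).cuda())
    print(out.shape)
```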

Homelab Infrastructure

High-performance computing infrastructure for ML research, development, and personal projects

ML Training Server

Active

Core Components

  • CPU: AMD Threadripper PRO 5955WX (16-core)
  • Memory: 128GB DDR4-3200 ECC
  • GPU: 3× NVIDIA RTX 3090 (72GB VRAM)

Use Cases

  • Large Language Model Training
  • Multi-GPU Deep Learning
  • Distributed Computing
  • Model Fine-tuning

1.6 kW Peak Power
90 TFLOPS FP32
72 GB Total VRAM

Main Workstation

Active

Core Components

  • CPU: AMD Ryzen 9 9950X3D (16-core)
  • Memory: 96GB DDR5-6000
  • GPU: ASUS ROG Strix RTX 3090
  • Storage: 2× 4TB NVMe PCIe 4.0

Features

  • 3D V-Cache Technology
  • HYTE Y70 Touch Case
  • Platinum PSU (1200W)
  • High-Speed Storage

5.7 GHz Boost Clock
144 MB Total Cache
8 TB NVMe Storage

Storage Server (NAS)

Active

Storage Configuration

  • Device: Synology DS2422+
  • Drives: 4× 24TB + 3× 16TB + 1× 8TB
  • Memory: 32GB ECC SODIMM
  • Protection: CyberPower UPS

Services

  • ML Dataset Storage
  • Media Server (Plex)
  • Backup Target
  • Docker Containers

152 TB Raw Capacity
10 GbE Network
SHR-1 Protection

Interactive Demos

Explore machine learning concepts through interactive visualizations

Coming Soon

Interactive machine learning demonstrations are currently in development.

Check back soon for hands-on visualizations of ML concepts!

Get In Touch

Let's discuss machine learning, research collaborations, or interesting projects