Ryan Hopkins

AI Systems & Context Architect — I design reliable, human‑centered LLM workflows and ship full‑stack tools that turn messy data into decisions.

AI Systems Engineer · Tech Design (UX/UI) · Data · Full‑Stack · Context Engineering · LLM Orchestration · Docker / WSL (Ubuntu)
View My Work

Tech Stack

Node.js
React
Tailwind
LangChain
MongoDB
TensorFlow
Julia
Google ADK
Python
GCP
AI
Flask

About Me

I’m Ryan Hopkins — an AI systems engineer & context architect. I design reliable LLM workflows and ship full-stack + edge/offline systems, turning ambiguous specs into small, testable steps with clear checks.

I build with Node/Express, MongoDB, and EJS/Bootstrap, as well as newer stacks in Next.js + TypeScript with Tailwind/Framer. I develop on WSL2 (Ubuntu), containerize with Docker, and wire CI gates (lint/format, unit/E2E) into GitHub Actions.

On devices, my Raspberry Pi offline-learning work sharpened my edge/offline toolkit: Kolibri & Kiwix (ZIM) content packaging, multi-stage Docker and Packer/HCL imaging, artifact vs GitHub Releases versioning, and size-aware pipelines. I orchestrate ChatGPT (Senior Engineer/PM) with a Debugger LLM in a cross-model loop and use succinct handoff prompts to maintain continuity across sessions.

How I Use LLMs for Development

1 · Ideate
2 · Plan
3 · Build
4 · Debug
5 · Release

LLM Roles & Orchestration

  • ChatGPT → Senior Engineer / PM: architecture, code, CI/CD, docs, prompts.
  • Gemini → Senior Debugging Agent: analyzes stack traces, logs, env mismatches.
  • Arbitration loop: ChatGPT drafts the debugging prompt → Gemini returns analysis → ChatGPT verifies/contests → iterate to convergence.

Working Protocols

  • Small, testable steps with “Checks” after each command.
  • Stop-points for questions on type/runtime puzzles.
  • Eval & regression sets for prompt changes and LLM features.
  • Docs-first artifacts: READMEs, runbooks, handoff notes.

Cross-LLM Dialogue Loop

  1. ChatGPT composes context-rich debugger prompt.
  2. Gemini returns root-cause & fix plan.
  3. ChatGPT validates; if misaligned, critiques and re-tests.

Outcome: faster bug cycles, clearer reasoning traces.
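A minimal sketch of this arbitration loop, assuming injected `senior` and `analyst` client functions (stand-ins for the real ChatGPT and Gemini SDK calls) and a plain-text "AGREE" convergence signal:

```javascript
// Cross-LLM arbitration loop (sketch). Both model clients are injected so
// any provider SDK can back them; here they are plain async functions.
async function debugLoop(bugReport, { senior, analyst, maxRounds = 3 }) {
  // 1. Senior engineer LLM composes a context-rich prompt for the debugger.
  let prompt = await senior(`Compose a debugging prompt for:\n${bugReport}`);
  for (let round = 1; round <= maxRounds; round++) {
    // 2. Debugger LLM returns a root-cause analysis and fix plan.
    const analysis = await analyst(prompt);
    // 3. Senior LLM validates; "AGREE" ends the loop, otherwise it critiques.
    const verdict = await senior(`Validate this analysis:\n${analysis}`);
    if (verdict.startsWith("AGREE")) return { analysis, rounds: round };
    prompt = `Previous analysis was contested:\n${verdict}\nRevise.`;
  }
  return { analysis: null, rounds: maxRounds }; // no convergence
}
```

The convergence check and round cap are illustrative; the real loop also carries logs and stack traces in the prompt.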

Context Handoff Protocols

  • Project Handoff Prompt: goals, arch diagram, routes, env, known issues.
  • Context compression: brief history + current state + Next Steps.
  • LLM/session swaps without losing momentum.
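One way to mechanize the handoff prompt (the field names here are illustrative, not the production template): compose only the non-empty sections so the handoff stays compact.

```javascript
// Compose a compact project-handoff prompt from a state object (sketch).
// Empty sections are skipped to keep the context handoff short.
function composeHandoff(state) {
  const sections = [
    ["Goals", state.goals],
    ["Architecture", state.architecture],
    ["Routes", state.routes],
    ["Env", state.env],
    ["Known Issues", state.knownIssues],
    ["Next Steps", state.nextSteps],
  ];
  return sections
    .filter(([, v]) => v && v.length)
    .map(([h, v]) => `## ${h}\n${[].concat(v).join("\n")}`)
    .join("\n\n");
}
```

The output is pasted as the opening message of a new session, so the next LLM starts with brief history, current state, and Next Steps.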

CustomGPTs & Reuse

  • System prompts encode roles, tone, tools, and guardrails.
  • Reusable packs: Ticket Copilot, CS Copilot, AI Debugger Pro.
  • Process portability across projects (Achieve Reentry, Snapshot, Pi LMS).
Prompt Design · Knowledge Packs · Tool Patterns

Verification & QA

Lightweight evals, repeatable fixtures, and regression prompts to keep LLM features honest.

DevOps Integration

CI gates (lint/tests), Dockerized builds, and release notes generated with LLM assistance.

Documentation

Step-by-step guides, checks sections, and case-study write-ups for transparent handoffs.

Featured Projects

Capstone — Authoritarian Leadership & Employee Outcomes

MS in Industrial-Organizational Psychology capstone project. Explores how authoritarian leadership styles influence employee trust, performance, and wellbeing — blending research design, data analysis, and critical evaluation.

I-O Psychology · Research Design · SDT & Power Needs · Surveys & Scales · ANOVA / Stats · Data Storytelling

Achieve DXP CS + Ticket Copilot — AI Triage, Docs & Automation

CustomGPT built for Nucleos Customer Success to speed ticket triage, turn unstructured logs into structured steps, and draft clear client updates—boosting consistency and reducing onboarding time.

Custom GPT · Prompt Engineering · Structuring Data · Knowledge Base Design · Process Documentation · Manus AI

SkillSprint Coach — AI Micro-Learning Sprints & Habit Loops

CustomGPT that turns goals into 1–2 week learning sprints with checklists, spaced-practice prompts, and auto-generated reflection logs—helping users plan, execute, and track skill growth with minimal friction.

Custom GPT · Prompt Engineering · Micro-learning Design · Sprint Planning · Structured Templates · Reflection & Tracking

Case Study · Achieve Reentry

Play Podcast

Intro Summary

A web platform for justice-impacted job seekers: save & track applications, analyze progress, and generate tailored résumés/cover letters with AI assistance.

  • Hybrid job recommendation: heuristic pre-filters + LLM re-ranking with rationales.
  • Instant résumé/cover-letter generation from profile + job description.
  • Multi-model support (select different LLM providers) with safety guardrails.
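The hybrid recommendation flow can be sketched as below; `llmRerank` is a stand-in for the real provider call, and the keyword-overlap heuristic is a simplified example of the pre-filter stage:

```javascript
// Hybrid recommendation (sketch): a cheap heuristic pre-filter narrows the
// candidate pool, then an LLM re-ranks the survivors with rationales.
async function recommendJobs(profile, jobs, llmRerank, topK = 5) {
  // Heuristic pre-filter: keyword overlap between profile skills and posting.
  const prefiltered = jobs
    .map((job) => ({
      job,
      overlap: profile.skills.filter((s) =>
        job.description.toLowerCase().includes(s.toLowerCase())
      ).length,
    }))
    .filter((c) => c.overlap > 0)
    .sort((a, b) => b.overlap - a.overlap)
    .slice(0, 20); // only this shortlist is sent to the LLM

  // LLM re-rank: returns [{ id, score, rationale }] for the shortlist.
  const ranked = await llmRerank(profile, prefiltered.map((c) => c.job));
  return ranked.sort((a, b) => b.score - a.score).slice(0, topK);
}
```

Keeping the expensive LLM call behind the heuristic gate bounds cost and latency regardless of how many postings are in the database.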

My Role — Context Engineer

  • Designed prompt system & “context packs” for ranking and document generation.
  • Built guardrails (policy prompts + checks) and safe tool use patterns.
  • Set up evaluation loops: prompt tests, edge-case suites, and quality criteria.
Landing & mobile flows · Analytics & job-tracking

Super Admin Data Dashboard & AI Analyst

The admin suite captures novel quantitative signals across the job-search journey, provides interactive visualizations, and includes an AI Analyst that answers questions about the data using a guarded pipeline to minimize hallucinations—showing the generated aggregation pipeline and range so admins can verify results.

  • Filters by race/ethnicity, gender, conviction type; equity-gap and opportunity heatmaps.
  • Application analytics: totals, active pipeline, interviews, offers, rescinds; time-series trends.
  • AI Analyst: prompt templates + schema-aware context; pipeline preview + range disclosure for transparency.
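The guarded pipeline can be sketched as follows, assuming a hypothetical `llmProposePipeline` call and a MongoDB-style aggregation runner; the stage allow-list and disclosure format are illustrative:

```javascript
// Guarded analyst (sketch): the LLM proposes a MongoDB aggregation pipeline
// as JSON; it is validated against a stage allow-list before running and
// returned alongside the result so admins can verify what actually ran.
const ALLOWED_STAGES = new Set(["$match", "$group", "$sort", "$limit", "$project"]);

function validatePipeline(pipeline) {
  if (!Array.isArray(pipeline)) throw new Error("pipeline must be an array");
  for (const stage of pipeline) {
    const keys = Object.keys(stage);
    if (keys.length !== 1 || !ALLOWED_STAGES.has(keys[0])) {
      throw new Error(`disallowed stage: ${keys.join(",")}`);
    }
  }
  return pipeline;
}

async function answerQuestion(question, llmProposePipeline, runAggregation) {
  const pipeline = validatePipeline(await llmProposePipeline(question));
  const rows = await runAggregation(pipeline);
  // Disclose the pipeline and row count with the answer for transparency.
  return { pipeline, rows, disclosure: `Computed from ${rows.length} rows` };
}
```

Rejecting write-capable stages (e.g. `$out`, `$merge`) and surfacing the executed pipeline are what make the answers auditable rather than trust-me output.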
AI Analyst – verified pipeline & range · Admin dashboard – KPIs and equity analysis · User summary table – outcomes and demographics

Architecture (high-level)

  • Multi-portal design: Separate flows for Individuals, Employers, and Admins with tailored dashboards.
  • Backend services: Node.js + Express API connected to dual MongoDB instances (identity & application data).
  • Frontend experience: EJS templating, Bootstrap, and Vanilla JS for fast, interactive CRUD and dashboards.
  • AI orchestration: Hybrid pipeline (heuristic filtering → LLM re-ranking) for job recommendations and career assistance.
  • Data dashboards: Admin-facing visualizations (ApexCharts + AG Grid) with built-in AI Analyst for equity gap analysis, trend insights, and hallucination-resistant outputs.
  • Security & privacy: Role-based access, session management, and user-controlled data visibility.

Tech Highlights

Technology Stack

  • Server: Node.js, Express.js
  • Database: Dual MongoDB instances (Identity/Profile DB + App DB)
  • Templating: EJS with ejs-mate layouts
  • Styling: Bootstrap 5 (npm), custom CSS
  • Frontend JS: Vanilla JS + page-specific helpers
  • AI Integration: @google/generative-ai, axios (OpenRouter)
  • Key Libraries: bcryptjs, express-session + connect-mongo, multer, nodemailer, connect-flash, method-override, FullCalendar, Choices.js, he
  • Data Visualization: ApexCharts, AG Grid
  • Environment: dotenv
  • Version Control: Git
Live demo (coming soon)

Case Study · Raspberry Pi Offline Learning

Intro Summary

A self-contained, offline-first learning hub built on Raspberry Pi. Serves Kolibri LMS and Kiwix ZIM archives over a local network—no internet required. The system is Dockerized, reproducible, and versioned via GitHub Releases, with modular content updates for secure facilities.

  • Goals: accessibility in prison settings, zero-internet delivery, simple updates, and auditability.
  • Outcomes: reproducible images, quicker field installs, controlled content curation, and low-touch maintenance.
  • Process: multi-LLM workflow (ChatGPT ↔ Debugger LLM) with handoffs & verification checks.

My Role — Context Engineer

  • Designed offline architecture, content packaging, and CI/CD release strategy.
  • Built Docker/Packer (HCL) pipelines and GitHub Actions for artifact vs release flows.
  • Ran cross-LLM debugging loops; authored templates for handoffs and reproducible fixes.
Device + Classroom Topology

Content Flow: ZIM → Kiwix → Local Wi-Fi

Offline Content & Sync Model

The platform packages open educational content into ZIM archives for Kiwix and channels for Kolibri. Content can be updated modularly without reflashing the device. For QA, ZIMs are validated with a local kiwix-serve instance before being promoted to a release.

  • Kiwix: host .zim archives (e.g., Wikipedia subsets, NASA pages) via kiwix-serve --port=8099 content.zim.
  • Kolibri: load curated channels (TED-Ed, CK-12, Khan Academy, MIT Blossoms, etc.); track learner progress offline.
  • Modular updates: swap or add ZIMs/channels; keep the base system stable.
  • Size strategy: separate “test artifacts” from “release images”; avoid bloating the build pipeline (160GB+ datasets kept external to CI build steps).
Content QA
kiwix-serve --port=8099 ./zim/test.zim
# open http://localhost:8099 for quick checks
Kolibri Import
# via UI or CLI (kolibri manage importchannel)
# test content & progress tracking fully offline
Release Hygiene
# artifacts for temporary QA
# GitHub Releases for versioned images + notes

Architecture (high-level)

  • Edge node: Raspberry Pi OS; local Wi-Fi/LAN; no WAN dependency.
  • Services: Kolibri (LMS) + Kiwix (ZIM server); system services start on boot.
  • Content store: external storage partition for large ZIMs/channels; hot-swappable.
  • Admin flow: web UI over the local network; image upgrades via GitHub Releases.
  • Security posture: minimal open ports; non-root services; secrets via env files; no telemetry by default.

Tech Highlights

Technology Stack

  • OS / Device: Raspberry Pi OS
  • LMS: Kolibri
  • Offline web: Kiwix + ZIM archives
  • Containerization: Docker (multi-stage)
  • Image build: Packer (HCL) for SD-card images
  • Scripting: Bash / WSL2
  • CI/CD: GitHub Actions (build, lint/tests, artifact uploads)
  • Releases: GitHub Releases (versioned images, changelogs)
  • LLM workflow: ChatGPT (Senior Engineer/PM) + Debugger LLM (cross-model loop)
Release notes (coming soon)

Case Study · Capstone (I-O Psychology)

Research Summary

Proposed a metrics-based leadership intervention to redirect authoritarian leaders’ power needs toward goal attainment—testing whether this boosts employee motivation & morale in a realistic organizational setting.

  • Single-group quasi-experimental design (pre / post / follow-up).
  • Validated instruments (e.g., PMI, UWES subscales); repeated-measures ANOVA.
  • Theory-driven: McClelland’s Power Needs & Self-Determination Theory.

My Role — Research Lead

  • Built the study design, measures, and analysis plan.
  • Operationalized constructs; selected validated scales.
  • Planned data handling, ethics, and dissemination.

Education & Certificates

SNHU

M.S. Industrial-Organizational Psychology

Southern New Hampshire University · 2023 – 2025

Experimental design and statistical analysis to improve workplace performance, leadership, and organizational change.

Vanderbilt University

AI Agent Developer

Vanderbilt University · 6-course specialization

Design and build AI agents in Python—agentic architectures, tool use & memory, custom GPTs, and responsible AI.

IBM

Generative AI for Software Developers

IBM · 3-course specialization

Practical path from GenAI fundamentals to prompt engineering and applying LLMs to software tasks (code generation, refactoring, testing) using tools like ChatGPT, GitHub Copilot, Gemini, and IBM watsonx.

Google Cloud

Digital Transformation Using AI/ML with Google Cloud

Google Cloud · 3-course specialization

Three-course series on cloud fundamentals, app modernization (containers/serverless/APIs), and managing ML projects—use cases, feasibility, and responsible AI.

Google

Google UX Design — Professional Certificate

Google · 7-course series

Seven-course path covering UX research, personas & journeys, wireframing/prototyping in Figma, usability testing & accessibility—culminating in a portfolio of three end-to-end projects.

Stanford Online

Introduction to Design Thinking

Stanford Online · d.school

Human-centered design methods from the d.school—empathy research, reframing, ideation, and rapid prototyping/testing to drive innovation.

Stanford Online

Product Management: Transforming Opportunities into Great Products

Stanford Online · d.school

Intro to the product lifecycle—from opportunity framing to launch and growth—covering problem selection, audience insight, roadmapping, and feature prioritization.

Stanford Online

Demand Creation: Launching and Growing Your Product

Stanford Online · d.school

Build a repeatable system for product growth—audience targeting, messaging, campaign strategy, growth tactics, and performance measurement to generate and sustain demand.

University of Michigan

Python for Everybody — Specialization

University of Michigan · 5-course specialization

Core Python through data structures, web data/APIs, and SQLite databases, capped by a project on data retrieval, processing, and visualization (taught by Dr. Charles Severance).

University of Cape Town

Julia Scientific Programming

University of Cape Town · 4-module course

Intro to Julia for scientific computing—syntax & types, multiple dispatch, arrays/loops, Jupyter notebooks, and packages like Plots, DataFrames, and Stats—applied to real data (Ebola case study).

Meta

Programming with JavaScript

Meta · course

JavaScript fundamentals through five modules—variables & data types, functions, objects/arrays, DOM & events, and unit testing with Jest.

University of Maryland

Product Management Essentials

University of Maryland · course

Defines the PM role and core responsibilities, key skills/competencies, and how PMs work with teams and stakeholders to build successful products.

Get in Touch

Want to collaborate or just say hi? Shoot me an email—I'm always excited to discuss AI, justice tech, or any opportunity to innovate.

Email Me