Experience & work

What I've built

A decade of engineering, split roughly two ways: building AI and platform systems, and leading the larger cross-team projects that ship them. In the project write-ups below, internal system names, identifiers, and collaborator names have been generalised.

Experience

Where I've worked

From supply-chain integrations to AI for financial crimes, with a consistent pull toward the underlying systems.

Software Engineer & Tech Lead Robinhood Markets Mar 2022 to Present

Tech lead in Financial Crimes Engineering. These days I build LLM systems for investigations; before that, I led the cross-team platform and launch projects underneath them.

  • Built greenfield LLM pipelines that automate adverse-media research and fraud-typology classification for investigators, saving on the order of 1,000 hours of investigator time a month.
  • Built and shipped a fleet of autonomous coding agents the team uses day to day, contributing roughly a 25% lift in team velocity.
  • Led the in-house rebuild of regulatory filing as the team deprecated its Actimize vendor stack, shipping SAR filing to FinCEN and STR filing to Canada's FINTRAC.
  • Designed and led the team behind a self-serve detection platform, cutting detection latency from over 24 hours to under 30 minutes.
  • Drove the simultaneous, incident-free launch of crypto transaction screening across the EU and UK, owning internationalisation for customer onboarding and money movement.
  • Led a team of L4 engineers building the unified alert-management platform that became the backbone for the rest of FinCrimes.
Software Engineer & Tech Lead Google Jan 2020 to Mar 2022

Tech lead for inventory and warehouse control systems in the Technical Infrastructure organisation.

  • Led four engineers and fifteen support engineers running the inventory system through Google's company-wide ERP migration from Oracle to SAP, coordinating changes across many interdependent integration points simultaneously.
Lead Software Engineer Bravo Reporting Systems Jan 2015 to Oct 2025

A startup whose SaaS app generated activity reports for companies and project managers; I stayed on through its closure in October 2025.

  • Mentored new hires, getting them up to speed on feature work and refactors and giving them a real path into a software engineering career.
  • Cut page load times by 90% by reworking the database architecture and query logic.
  • Built and maintained a multi-stage CI/CD pipeline from scratch for fast, safe releases.
Senior Software Engineer SPS Commerce Apr 2016 to Aug 2019

A B2B SaaS company building supply-chain integration frameworks across retail, eCommerce, and fulfilment systems.

  • Designed and built containerised web apps for document translation.
  • Built integrations into ERP and storefront systems including Salesforce, NetSuite, and Magento.
  • Used AWS API Gateway and Lambda to migrate a monolith to a microservice architecture.
University of Minnesota
Aug 2010 to May 2014 · Minneapolis, MN
Major in Mathematics, minor in Computer Science.
University of Oxford
Spring 2014 · Oxford, UK
Semester abroad. Thesis: mathematically modelling traffic flow; also studied English scientific and military history.
Selected work

Six projects, two themes

The AI systems I've built, and the larger projects I've led. Each is written up as problem → role → approach → impact.

01 / 06 · AI 2024 to 2025

Financial Crimes LLM

LLM pipelines Python Greenfield Model evaluation

The problem

Investigators were doing two slow jobs by hand. One was adverse-media research: searching the web against customer details to surface negative news, inconsistently and without scaling. The other was fraud-typology classification: reading each fraud alert and labelling it (identity theft, account takeover, money mule, payment fraud) before anything could be automated on top of it.

My role

Greenfield owner of the LLM work; it was also the first sustained Python I shipped at Robinhood. I built the negative-news pipeline solo, then led its expansion into fraud typology.

Approach

  • A negative-news pipeline: search the web, scrape it through a proxy egress, summarise with an LLM, and stage results in object storage. Every hit is scored per attribute (name, occupation, location) so investigators see why it matched.
  • A fraud-typology layer on the same foundation: a classifier agent and an explainer agent per typology, the classifier tuned for precision and the explainer for readability.
  • A model-evaluation framework built alongside the classifiers, plus a comparison pipeline that measures the LLM's agreement with the existing rule engine, so a classifier change is a measurable claim.
  • Data-gathering modelled as dependency-ordered workflows so work parallelises safely, with a generated case timeline as the main human-readable output.
  • Legitimacy analysis as a counterweight: each classifier also weighs the evidence against its own typology.
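The dependency-ordered workflow model above can be sketched with Python's standard-library topological sorter. Task names here are hypothetical stand-ins, not the real pipeline's, and the actual execution engine is internal; the point is only how dependency ordering yields safe parallel batches.

```python
from graphlib import TopologicalSorter

# Hypothetical data-gathering tasks mapped to their prerequisites.
DEPENDENCIES = {
    "fetch_customer_profile": set(),
    "fetch_transactions": set(),
    "search_adverse_media": {"fetch_customer_profile"},
    "build_case_timeline": {
        "fetch_customer_profile",
        "fetch_transactions",
        "search_adverse_media",
    },
}


def execution_batches(deps):
    """Group tasks into batches: everything within a batch can run in
    parallel, because all of its prerequisites finished in earlier batches."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    batches = []
    while ts.is_active():
        ready = sorted(ts.get_ready())  # all tasks whose deps are satisfied
        batches.append(ready)
        ts.done(*ready)
    return batches
```

The two fetches land in the first batch and run concurrently; the case timeline, which depends on everything, runs last.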

Impact

The negative-news pipeline saves investigators up to 15 minutes a case across roughly 4,000 cases a month, on the order of 1,000 investigator-hours back every month. The fraud-typology layer auto-routes about 15% of fraud cases to a lighter-touch queue.

02 / 06 · AI 2026

AI Developer Tooling

Autonomous agents Developer tooling Standards authorship Team enablement

The problem

Autonomous coding agents were moving from novelty to real engineering tooling, but an agent is only as good as the ground truth it reasons from. Without a shared rubric and a single source of truth, every agent (and every human reviewer) worked from its own version of "right."

My role

One of the first on my team to push agents into everyday use, and the one who built our agent fleet and wrote the playbook for using them.

Approach

  • Authored the team's Go coding standards as a rule-ID catalogue (testing, logging, context, error-handling, code-organisation rules with stable IDs) so a review cites a rule instead of relitigating it.
  • Stood up the wiki the agents reason from, giving them an auditable source of truth instead of drifting tribal knowledge.
  • Built a fleet of autonomous coding agents: a Go implementation agent that follows those standards, a design-doc drafter, and a PR reviewer that cites rule IDs in its comments.
  • Wrote the team's guide to how and when to use the agents, premiered at a company all-hands.

Impact

Roughly a 25% lift in team velocity, and a personal multiple closer to 4x: a two-day task done in a couple of hours, a design doc drafted in fifteen minutes. One cross-team engineering proposal, written in about thirty minutes with an agent, was adopted by the partner team as a template for how to propose changes across codebases.

03 / 06 · Leadership 2025 to 2026

Replacing Actimize: In-House Regulatory Filing

Actimize deprecation Cross-team leadership Regulated systems Multi-region

The problem

Robinhood's financial-crimes operations ran on Actimize, a third-party vendor. Regulatory filing went through Actimize's managed filer, which meant every submission was gated on the vendor's uptime, field limits, acknowledgment handling, and pricing. The team set out to deprecate Actimize and rebuild its core capabilities in-house, and regulatory filing was the part that had to be exactly right.

My role

Architect and principal engineer on the in-house filing stack, and the first-credited engineer on the launch. The effort reached across half a dozen systems and teams; I drove the design and the coordination, and led other engineers building alongside me.

Approach

  • Built the in-house pipeline for filing Suspicious Activity Reports (SARs) to FinCEN: an XML generator against the published federal schema, tested across every required permutation (fiat vs. crypto, individual vs. entity, joint vs. solo, amendment vs. initial).
  • A secure file-transfer client as the canonical transport (upload the filing package to the regulator, poll for the acknowledgment, pull it back) with acknowledgment matching that drives a form state machine: the regulator's identifier on accept, the rejection reasons on reject.
  • Extended the same stack to Canada for Suspicious Transaction Report (STR) filing to FINTRAC, through a per-region module so adding a new regulator's schema, transport target, and validation rules didn't disturb the FinCEN path.
  • Bidirectional status mirroring with eventual consistency: the filer owns the lifecycle and writes status back; the system of record never gets ahead of it.
  • A 90-day parallel-filing window, with every report filed both through Actimize and the new stack and reconciled, so a correctness regression would surface before cutover.
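The acknowledgment-driven form state machine above can be sketched in a few lines. State names, event names, and payload shapes here are illustrative, not the regulators' actual schemas: the point is that only the acknowledgment moves the form, accept records the regulator's identifier, and reject records the reasons.

```python
# Legal transitions: (current state, event) -> next state.
ALLOWED = {
    ("SUBMITTED", "ack_accept"): "ACCEPTED",
    ("SUBMITTED", "ack_reject"): "REJECTED",
    ("REJECTED", "resubmit"): "SUBMITTED",
}


class Filing:
    def __init__(self):
        self.state = "SUBMITTED"
        self.regulator_id = None
        self.rejection_reasons = []

    def apply(self, event, payload=None):
        key = (self.state, event)
        if key not in ALLOWED:
            raise ValueError(f"illegal transition {event!r} from {self.state}")
        self.state = ALLOWED[key]
        if event == "ack_accept":
            self.regulator_id = payload  # regulator's identifier on accept
        elif event == "ack_reject":
            self.rejection_reasons = payload or []  # reasons on reject
```

Anything outside the transition table raises, which is what keeps the system of record from getting ahead of the filer.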

Impact

The investigations team now files SARs to FinCEN and STRs to FINTRAC with the entire critical path owned in-house (generation, transport, acknowledgment, downstream case creation) and Actimize out of the regulatory-filing loop end to end. It's the keystone of the broader effort to retire the vendor.

04 / 06 · Leadership 2024

Self-Serve Detection Platform

Team lead Platform design Proto-first schema Distributed compute

The problem

A business user who wanted a new detection rule (say, flag anyone whose 30-day deposit volume crosses a threshold) had to file an engineering ticket and wait about a week for someone to compute the underlying data feature, test it, and ship it.

My role

I designed the platform and led the team that built it end to end: the schema, the pipeline, the rollout plan.

Approach

  • A proto-first feature schema, with features keyed by name, cadence, and entity, so rules reference them without hard-coding their shape.
  • A three-phase rollout: prove the pipeline on existing storage, move to a dedicated feature store as volume grew, then open a self-serve UI as the end state.
  • Orchestration split from compute: a scheduler orchestrates, a distributed engine calculates, watermarks tell downstream rule evaluation when to go.
  • Batch IDs traced through the rule engine so on-call can follow one batch from generation to alert.
  • A dedicated PII worker boundary keeping sensitive-data handling walled off.
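The feature-key and watermark ideas above, sketched in miniature. The real schema is protobuf and the field names here are assumptions; the sketch only shows the shape: features keyed by name, cadence, and entity, and rule evaluation gated on the compute engine's watermark.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class FeatureKey:
    name: str     # e.g. "deposit_volume_30d" (hypothetical)
    cadence: str  # e.g. "daily", "hourly"
    entity: str   # e.g. "customer", "account"


def ready_to_evaluate(watermarks, key, as_of):
    """Downstream rule evaluation fires only once the compute engine's
    watermark for this feature has reached the evaluation time."""
    wm = watermarks.get(key)
    return wm is not None and wm >= as_of
```

Because the key is a frozen dataclass it is hashable, so it can index the watermark map directly; rules reference the key without hard-coding the feature's shape.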

Impact

Detection latency dropped from over 24 hours to under 30 minutes, and a new rule became a config change instead of an engineering project. The bottleneck moved from engineering capacity to ordinary platform upkeep.

05 / 06 · Leadership 2023

EU + UK Crypto Screening Launch

Cross-team Crypto screening Internationalisation Zero downtime

The problem

Robinhood was bringing crypto to two new regulatory regions at once (the EU and the UK) and each needed financial-crimes screening on crypto customers and crypto transactions: sanctions, politically-exposed-person, and adverse-media checks, with rules specific to each jurisdiction. The existing crypto-screening path called a vendor directly, baked regional logic into the identity system, and needed an engineering cycle for every rule change. It wasn't going to survive two simultaneous launches.

My role

Primary engineer, end-to-end owner, and the person driving the simultaneous go-live. I owned internationalisation of crypto screening for both customer onboarding and money movement.

Approach

  • Built a dedicated crypto-screening service for customer and money-movement screening, driven by dynamic runtime configuration so business partners could tune jurisdiction rules without a deploy. That decision made a same-day dual-region launch possible.
  • Modelled locality as a typed field across the service boundary so regions couldn't drift.
  • Migrated the identity system off its direct vendor calls and onto the new service.
  • Built dependency-injected, mockable clients throughout, and a monitoring dashboard for launch day.
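A minimal sketch of the typed-locality idea: jurisdiction rules keyed by an enum rather than free-form strings, looked up from runtime configuration. The rule names and values below are invented for illustration; the real configuration and its contents are internal.

```python
from enum import Enum


class Locality(Enum):
    EU = "EU"
    UK = "UK"


# Illustrative runtime configuration, keyed by the typed locality so the
# two regions cannot drift into stringly-typed variants of each other.
SCREENING_CONFIG = {
    Locality.EU: {"pep_check": True, "adverse_media": True},
    Locality.UK: {"pep_check": True, "adverse_media": True},
}


def rules_for(locality: Locality) -> dict:
    """In production this read comes from runtime config, so business
    partners can tune jurisdiction rules without a deploy."""
    return SCREENING_CONFIG[locality]
```

A misspelled region is a type error at the boundary, not a silent misroute at screening time.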

Impact

Both regions went live the same day, with no downtime and no incidents. Every crypto customer could onboard and move funds, with EU load peaking around 10 requests a second. The launch unblocked the EU and UK crypto rollouts, moved rule-tuning to business partners, and the config-driven design later let the whole service be handed off cleanly to another team.

06 / 06 · Leadership 2023

Alert-Management Platform

Team lead Backbone service gRPC Foundational patterns

The problem

Financial-crimes alerts moved through a patchwork of vendor-dependent, ad-hoc paths. There was no Robinhood-owned service acting as the case-grouping and enrichment layer, no single place to emit to or read from. Every system that integrated with FinCrimes added another integration point, and investigators opened cases that were thin on context.

My role

Primary engineer on the build-out, leading a team of L4 engineers. My first time leading a sub-team inside a larger project.

Approach

  • Bootstrapped and built the backbone service end to end, from empty infrastructure to production gRPC endpoints with streaming consumers, a relational store, runtime-configurable behaviour, and full observability.
  • Gave upstream systems a single onboarding path and one contract, collapsing the integration points the team had to maintain.
  • Enriched alert data into the case-management system so investigators opened cases with the context already attached.
  • Baked in idempotency and duplicate handling early, and set the service patterns (context-first RPCs, mock-based tests, dependency-injected clients, runtime config) that every later FinCrimes service inherited.
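The idempotency handling baked in early amounts to the pattern below: each alert delivery carries a key, and a replay of the same key is a safe no-op rather than a duplicate case. Names are illustrative; the real service keys off persistent storage, not an in-memory set.

```python
class AlertIngestor:
    """Sketch of at-least-once delivery made effectively exactly-once
    by deduplicating on an idempotency key."""

    def __init__(self):
        self._seen = set()   # production: a persistent uniqueness constraint
        self.accepted = []

    def ingest(self, idempotency_key, alert):
        if idempotency_key in self._seen:
            return False     # duplicate delivery: drop, don't double-process
        self._seen.add(idempotency_key)
        self.accepted.append(alert)
        return True
```

Upstream producers can then retry freely, which is what makes a single onboarding path with one contract safe to offer.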

Impact

Cut onboarding time for other Robinhood systems, reduced the integration points to maintain, and, through richer case data, cut average investigation time by nearly a month: around a 30% drop for alerts running through the platform. It became the substrate nearly every later FinCrimes system was built on.

Also

A few more things

Crypto Screening Resilience

The crypto screening system leaned on a single analysis vendor, a single point of failure a formal risk assessment had flagged as top-tier. I pitched a second vendor onto the roadmap on resilience grounds, designed a unified detection interface, and led the integration and roughly 10,000-line refactor across the partner teams that depended on it. It closed the risk, added a failover, and unlocked crypto in the EU.
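The unified detection interface reduces to a small abstraction: both vendors sit behind one contract, and the caller fails over when the primary is unavailable. The class and exception names below are hypothetical, and the real failover policy (timeouts, retries, circuit-breaking) is more involved than this sketch.

```python
class VendorUnavailable(Exception):
    """Raised when a vendor cannot serve the request."""


class DetectionClient:
    """The unified interface both vendor integrations implement."""

    def screen(self, address: str) -> dict:
        raise NotImplementedError


class FailoverScreening:
    """Try the primary vendor; on an availability error, fall back to the
    secondary behind the same interface."""

    def __init__(self, primary: DetectionClient, secondary: DetectionClient):
        self.primary = primary
        self.secondary = secondary

    def screen(self, address: str) -> dict:
        try:
            return self.primary.screen(address)
        except VendorUnavailable:
            return self.secondary.screen(address)
```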

Go Coding Standards

Authored the team's Go coding standards as a rule-ID catalogue: testing, logging, context, error-handling, code organisation. Every reviewer, human or agent, now checks against the same rubric.

Distributed Tracing & Observability

Drove the rollout of distributed tracing and structured monitoring across the team's services, with standard sampling tiers and layouts so new services could adopt them with no design discussion. A post-mortem can now follow one request across every service boundary.

Actimize Data Migration

Authored the design and built the importer that migrated millions of historical alerts and cases off Actimize into Robinhood's own case-management model: a mapper-per-category framework, object-storage staging, and a scheduled worker. Part of the same vendor-deprecation effort.
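The mapper-per-category framework is essentially a registry: each legacy record category contributes its own mapping function into the in-house case model, and the importer dispatches on category. Category names and target fields below are illustrative, not the real schemas.

```python
MAPPERS = {}


def mapper(category):
    """Decorator registering one mapping function per legacy category."""
    def register(fn):
        MAPPERS[category] = fn
        return fn
    return register


@mapper("alert")
def map_alert(record):
    return {"kind": "alert", "external_id": record["legacy_id"]}


@mapper("case")
def map_case(record):
    return {"kind": "case", "external_id": record["legacy_id"]}


def migrate(record):
    """Dispatch a legacy record to its category's mapper."""
    return MAPPERS[record["category"]](record)
```

Adding a new category is one new decorated function; the dispatch path never changes.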

Skills

The toolkit

Picked up across a decade of shipping, heavier on AI and systems work lately, but the foundations still do a lot of the work.

AI & LLMs

LLM pipelines Prompt engineering Agentic workflows Model evaluation Retrieval & summarisation AI developer tooling

Languages

Go Python TypeScript Java Rust PHP

Infrastructure & Systems

AWS Kubernetes Docker Serverless gRPC Kafka Airflow Spark CI/CD SQL / PostgreSQL Protocol Buffers

Leadership & Craft

Technical leadership Cross-team delivery Architecture & design docs Code review & standards Mentorship Observability & tracing Statistics & financial math