Cloud Projects

Sports Odds Data Pipeline

I built an end-to-end, serverless AWS pipeline that ingests sportsbook odds, normalizes and validates records, stores both raw + curated datasets, and serves low-latency API responses for a live UI. The site auto-deploys via GitHub Actions to S3/CloudFront, and the data API is backed by API Gateway + Lambda + DynamoDB.

What this demonstrates

  • Event-driven ingestion + serverless ETL (sketched after this list)
  • Data modeling for query-efficient reads
  • CI/CD automation (infra + app)
  • Production-style concerns: CORS, caching, least privilege, and reliability
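To make the first bullet concrete, here is a minimal sketch of what the scheduled ingest/validate Lambda does: fetch odds on an EventBridge trigger, drop malformed records, and land raw JSON in the S3 raw zone. The environment variable names (RAW_BUCKET, ODDS_API_URL), the validated field names, and the key layout are illustrative assumptions, not the exact production code.

```python
import json
import os
import datetime
import urllib.request

import boto3

s3 = boto3.client("s3")
RAW_BUCKET = os.environ["RAW_BUCKET"]        # assumed env var name
ODDS_API_URL = os.environ["ODDS_API_URL"]    # assumed env var name

def handler(event, context):
    """Invoked on an EventBridge schedule: fetch odds, validate, land raw JSON in S3."""
    with urllib.request.urlopen(ODDS_API_URL, timeout=10) as resp:
        records = json.load(resp)

    # Keep only records carrying the fields the curated layer depends on
    # (field names are assumptions for illustration).
    valid = [r for r in records if r.get("game_id") and r.get("commence_time")]

    # Partition the raw zone by ingest date so reprocessing a day is a prefix scan.
    now = datetime.datetime.now(datetime.timezone.utc)
    key = f"raw/odds/{now:%Y-%m-%d}/{now:%H%M%S}.json"
    s3.put_object(Bucket=RAW_BUCKET, Key=key, Body=json.dumps(valid))

    return {"ingested": len(valid), "s3_key": key}
```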

Tech: AWS (S3, CloudFront, API Gateway, Lambda, DynamoDB, EventBridge, Glue), Python, Terraform, GitHub Actions

Outcomes: Reliable ingestion → normalized data → low-latency API → live UI table

Live: CI/CD via GitHub Actions → S3 sync → CloudFront cache invalidation (automatic deploy on push)
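The workflow itself is GitHub Actions YAML; the sketch below expresses what its deploy job does in boto3 terms, under assumed placeholder values for the bucket name and distribution ID: mirror the built site to S3, then invalidate the CloudFront cache so the new version serves immediately.

```python
import mimetypes
import time
from pathlib import Path

import boto3

SITE_BUCKET = "example-site-bucket"   # placeholder
DISTRIBUTION_ID = "EXXXXXXXXXXXXX"    # placeholder

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

def deploy(site_dir: str = "site") -> None:
    """Mirror the built site to S3, then invalidate the CloudFront cache."""
    for path in Path(site_dir).rglob("*"):
        if path.is_file():
            key = path.relative_to(site_dir).as_posix()
            content_type = mimetypes.guess_type(path.name)[0] or "binary/octet-stream"
            s3.upload_file(str(path), SITE_BUCKET, key,
                           ExtraArgs={"ContentType": content_type})

    # Invalidate everything so CloudFront stops serving stale objects.
    cloudfront.create_invalidation(
        DistributionId=DISTRIBUTION_ID,
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/*"]},
            "CallerReference": str(time.time()),
        },
    )
```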

Architecture Diagram: High-Level

Ingest path: Odds API/Feeds → EventBridge (schedule trigger) → Lambda (ingest/validate) → S3 Raw Zone

Read path: CloudFront + S3 (projects.html table) → API Gateway (/games) → Lambda (calculate today's games + odds) → DynamoDB (sport_date + game_id)
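The composite key in the read path is what makes reads query-efficient: partitioning on sport_date and sorting on game_id means one Query returns a full day's slate for a sport. A hypothetical item under that design might look like the following; the nba#2024-11-02 key format and the odds attributes are assumptions for illustration.

```python
import boto3

table = boto3.resource("dynamodb").Table("games")  # table name is a placeholder

# Partition on sport_date, sort on game_id: one Query on sport_date
# fetches the whole slate for a sport and day in a single call.
table.put_item(Item={
    "sport_date": "nba#2024-11-02",  # assumed key format: sport + ISO date
    "game_id": "LAL-BOS-19:30",      # assumed sort-key format
    "home_team": "BOS",
    "away_team": "LAL",
    "home_odds": "-150",             # stored as strings to sidestep Decimal handling
    "away_odds": "+130",
})
```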

Live Demo: Today's Games + Custom Bet

This table is generated dynamically by my API (API Gateway + Lambda), which queries DynamoDB for today's slate.
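A minimal sketch of that read path, assuming the table and parameter names above: a Lambda handler behind API Gateway that queries today's sport_date partition and returns JSON with a CORS header so the CloudFront-hosted page can call it.

```python
import datetime
import json
import os

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "games"))

def handler(event, context):
    """GET /games: return today's slate for a sport, keyed on sport_date."""
    sport = (event.get("queryStringParameters") or {}).get("sport", "nba")
    today = datetime.datetime.now(datetime.timezone.utc).date().isoformat()

    resp = table.query(
        KeyConditionExpression=Key("sport_date").eq(f"{sport}#{today}")
    )

    return {
        "statusCode": 200,
        # CORS header so the S3/CloudFront-hosted page can call this API.
        "headers": {
            "Access-Control-Allow-Origin": "*",
            "Content-Type": "application/json",
        },
        "body": json.dumps(resp["Items"], default=str),
    }
```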

All times are in your local timezone.
