Reigniting My Star Wars Data Project
Moving from brittle API scraping and data loss to a sustainable local MediaWiki mirror using dumps, Docker, and Unraid.
I shelved this project for a while after a string of avoidable problems: unstable containers, MongoDB volume mishaps, worries about API overuse, and a general sense that I was “hammering” the Star Wars Fandom wiki harder than felt respectful. This post covers how I rebooted the whole effort around a radically simpler (and more ethical) model: pull an official dump, stand up a local MediaWiki, and iterate offline.
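To make the “dump plus local MediaWiki” model concrete, here is a minimal Docker Compose sketch of the kind of stack that approach implies. It is not my exact configuration: the image tags, database credentials, container names, and the /mnt/user/appdata/... host paths are placeholders chosen to look at home on an Unraid box.

```yaml
# Minimal sketch, not the exact stack: image tags, credentials, and the
# /mnt/user/appdata/... host paths are placeholders for an Unraid-style setup.
services:
  mediawiki:
    image: mediawiki:1.41            # official MediaWiki image
    ports:
      - "8080:80"
    volumes:
      - /mnt/user/appdata/sw-wiki/images:/var/www/html/images
      - /mnt/user/appdata/sw-wiki/dumps:/imports:ro   # downloaded Fandom XML dump lives here
      # After the first-run web installer, mount the generated LocalSettings.php back in:
      # - /mnt/user/appdata/sw-wiki/LocalSettings.php:/var/www/html/LocalSettings.php
    depends_on:
      - db
  db:
    image: mariadb:10.11             # official MariaDB image
    environment:
      MARIADB_DATABASE: starwars_wiki
      MARIADB_USER: wikiuser
      MARIADB_PASSWORD: changeme     # placeholder credential
      MARIADB_RANDOM_ROOT_PASSWORD: "1"
    volumes:
      - /mnt/user/appdata/sw-wiki/db:/var/lib/mysql
```

Once the containers are up and the web installer has written LocalSettings.php, the dump can be loaded entirely offline with MediaWiki's own maintenance scripts, roughly: run importDump.php inside the mediawiki container against the mounted XML file, then rebuildrecentchanges.php so the imported pages show up normally (newer MediaWiki releases route these through maintenance/run.php). No API traffic against Fandom at all after the one-time download.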