Why I Built Sentinel

Mar 10, 2026 23:52 · 888 words · 5 minute read · productivity

I Got Tired of Losing Errors Across My Services, So I Built a Zero-Setup Error Tracker in Go

I built Sentinel for a very simple reason: I was tired of losing errors in my own development setup.

I was working across a FastAPI backend, Celery workers, Celery Beat, frontend, Docker, Redis, Postgres, and Qdrant. Every part of the system had its own logs. Every service had something to say. When things broke, the error was usually there somewhere, but finding it was often more painful than it should have been.

At some point I decided: that was the last time.


I’ve been a developer long enough to know the pattern. Something breaks. You docker logs -f into the void. Or you’re tailing log files from your backend. Or staring at your dev server’s stderr as it vomits a wall of text. You grep. You scroll. You find the error 20 minutes later, half-covered by INFO spam. Then the same error shows up tomorrow and you do it all over again because you forgot what you found.

Yes, tools exist. Sentry, Datadog, SigNoz. You can self-host Sentry. You can set up Jaeger for tracing. But every one of these requires instrumentation — install their SDK, wrap your code, configure DSNs or exporters, and for self-hosted Sentry you’re spinning up PostgreSQL, Redis, Kafka, and ClickHouse before you see your first error. That’s not a quick fix, that’s a side project.

I didn’t want any of that. I wanted something that works with the logs I already have. No SDK. No code changes. A single binary I could pipe output into and have it remember what broke.

So I built Sentinel.

(Screenshot: Sentinel terminal output grouping local application errors)

What It Actually Does

# Pipe anything into it
docker logs -f my_app | sentinel stdin
cat /var/log/app.log | sentinel stdin

# Tail log files with glob patterns
sentinel watch ./logs/*.log

# Or wrap your dev server directly
sentinel run -- npm run dev
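
One shell detail worth knowing if you go the pipe route: a plain pipe only carries stdout, and plenty of frameworks write their errors to stderr. Merging stderr into the pipe makes sure Sentinel sees everything (uvicorn here is just an example command):

# 2>&1 merges stderr into stdout before the pipe
docker logs -f my_app 2>&1 | sentinel stdin
uvicorn app:app 2>&1 | sentinel stdin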

Sentinel reads your logs — from stdin, from files, from a child process, wherever — detects errors and warnings, groups multiline stack traces (Python, Go, Node, Java, Rust, .NET, Kotlin), fingerprints recurring issues so the same NullPointerException doesn’t show up 47 times, and stores everything in a local SQLite database.
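
Most of that grouping boils down to a continuation heuristic: a line that starts with indentation, “at ”, or a traceback marker belongs to the event above it, not to a new one. Here’s a minimal sketch of the idea in Go; it’s my illustration of the heuristic, not Sentinel’s actual parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// isContinuation reports whether a line looks like part of a multiline
// stack trace rather than the start of a new log event. Illustrative
// heuristic only; real detection is per-language.
func isContinuation(line string) bool {
	trimmed := strings.TrimLeft(line, " \t")
	return strings.HasPrefix(line, " ") || strings.HasPrefix(line, "\t") || // indented frame
		strings.HasPrefix(trimmed, "at ") || // Node/Java frames
		strings.HasPrefix(trimmed, "File \"") || // Python frames
		strings.HasPrefix(trimmed, "Caused by:") // Java chained causes
}

func main() {
	var event []string
	flush := func() {
		if len(event) == 0 {
			return
		}
		fmt.Printf("--- event (%d lines) ---\n%s\n", len(event), strings.Join(event, "\n"))
		event = nil
	}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if len(event) > 0 && isContinuation(line) {
			event = append(event, line) // still inside the same stack trace
			continue
		}
		flush()
		event = []string{line}
	}
	flush()
}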

No cloud. No account. No YAML config files. One binary.

Want a UI? sentinel ui opens a local web dashboard at localhost:4040 where you can filter by project, severity, and status. Mark issues as investigating, resolved, or suppressed. When a resolved issue comes back, Sentinel auto-flags it as a regression.

The run command captures both stdout and stderr, tags the session with your git branch and commit, and attaches breadcrumb context — the 50 log lines that preceded each error. The watch command tails files with seek-based offset tracking and detects log rotation automatically.
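
The breadcrumb mechanism is conceptually just a ring buffer: keep the most recent 50 lines at all times, and snapshot them the moment an error fires. A sketch of that shape in Go, assuming nothing about Sentinel’s internals:

package main

import "fmt"

// crumbs keeps the most recent n log lines in a fixed-size ring buffer.
type crumbs struct {
	buf  []string
	next int
	full bool
}

func newCrumbs(n int) *crumbs { return &crumbs{buf: make([]string, n)} }

func (c *crumbs) add(line string) {
	c.buf[c.next] = line
	c.next = (c.next + 1) % len(c.buf)
	if c.next == 0 {
		c.full = true
	}
}

// snapshot returns the buffered lines, oldest first.
func (c *crumbs) snapshot() []string {
	if !c.full {
		return append([]string(nil), c.buf[:c.next]...)
	}
	return append(append([]string(nil), c.buf[c.next:]...), c.buf[:c.next]...)
}

func main() {
	c := newCrumbs(50)
	for i := 0; i < 120; i++ {
		c.add(fmt.Sprintf("INFO handled request %d", i))
	}
	// When an error fires, attach the snapshot as breadcrumb context.
	fmt.Println(len(c.snapshot()), "breadcrumb lines") // prints: 50 breadcrumb lines
}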

(Screenshot: Sentinel local web dashboard showing captured issues)

Why Not Just grep (or Sentry, or Jaeger)?

grep doesn’t remember. It doesn’t tell you “this is the same ConnectionRefusedError you saw 3 days ago, and it’s happened 14 times since.”

Sentry remembers, but it needs you to change your code. Add the SDK. Configure the DSN. Wrap your handlers. Self-host it? That’s Docker Compose with 20+ containers and a dedicated machine.

Jaeger gives you tracing, but it requires OpenTelemetry instrumentation — more SDKs, more code changes, more moving parts.

Sentinel sits at a different layer entirely. It reads your existing log output — the text your app already prints to stdout/stderr — and does the grouping, deduplication, and triage without touching your code.

The deduplication works by stripping UUIDs, timestamps, and request IDs before hashing, so that Error processing order abc-123 and Error processing order def-456 correctly land in the same bucket. The fingerprint is a SHA-256 over the normalized message plus the top three stack frames: stable across runs, and scoped per project so the same error in your billing service and your auth service stays separate.
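
To make the fingerprinting concrete, here’s a minimal sketch of the normalize-then-hash idea. The regexes and function names are mine, not Sentinel’s:

package main

import (
	"crypto/sha256"
	"fmt"
	"regexp"
)

// Illustrative patterns, not Sentinel's actual normalizer.
var (
	reUUID = regexp.MustCompile(`[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}`)
	reTime = regexp.MustCompile(`\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}\S*`)
	reID   = regexp.MustCompile(`[\w-]*\d[\w-]*`) // any token containing a digit: request IDs, counters
)

// normalize strips volatile tokens so two instances of the same error
// produce the same string.
func normalize(s string) string {
	s = reUUID.ReplaceAllString(s, "<uuid>")
	s = reTime.ReplaceAllString(s, "<ts>")
	s = reID.ReplaceAllString(s, "<id>")
	return s
}

// fingerprint hashes project + normalized message + up to 3 top frames.
func fingerprint(project, message string, frames []string) string {
	if len(frames) > 3 {
		frames = frames[:3]
	}
	h := sha256.New()
	fmt.Fprintln(h, project)
	fmt.Fprintln(h, normalize(message))
	for _, f := range frames {
		fmt.Fprintln(h, normalize(f))
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	a := fingerprint("billing", "Error processing order abc-123", nil)
	b := fingerprint("billing", "Error processing order def-456", nil)
	fmt.Println(a == b) // true: same bucket
}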

The LLM Part (Optional, and That’s the Point)

If you have Ollama or LM Studio running locally, Sentinel will ask the model to triage each new error — categorize it, suggest a root cause, flag risk level. It’s useful. It’s also completely optional. Pass --offline and you get pure heuristic triage, zero network calls, zero latency.

The LLM integration uses a standard OpenAI-compatible API. Point it at anything: local Ollama, a remote endpoint, whatever. It’s your call.
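
If “OpenAI-compatible” is fuzzy: it just means the standard chat-completions request shape. Ollama, for instance, exposes it at /v1/chat/completions, so anything that can send this request can act as the triage backend (“llama3.2” here is simply whatever model you have pulled):

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "Triage this error: ConnectionRefusedError on redis:6379"}]}'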

The Stack

Go 1.23. Single binary, cross-compiled for macOS (Intel + Apple Silicon), Linux, and Windows. SQLite with WAL mode for concurrent reads. No CGO — pure Go SQLite driver. The web UI is built on oat.ink — an ultra-lightweight UI library — and embedded directly into the binary via go:embed. The entire thing compiles to one file you drop in your PATH.
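
Both of those choices are only a few lines of Go. A minimal sketch, assuming modernc.org/sqlite as the pure-Go driver and ui/dist as the asset path (both are my placeholders, not necessarily what Sentinel uses):

package main

import (
	"database/sql"
	"embed"
	"io/fs"
	"log"
	"net/http"

	_ "modernc.org/sqlite" // pure-Go SQLite driver; no CGO needed
)

//go:embed ui/dist
var uiAssets embed.FS

func main() {
	db, err := sql.Open("sqlite", "sentinel.db")
	if err != nil {
		log.Fatal(err)
	}
	// WAL mode lets the dashboard read while the ingester writes.
	if _, err := db.Exec("PRAGMA journal_mode=WAL;"); err != nil {
		log.Fatal(err)
	}

	// Serve the embedded UI straight out of the binary.
	ui, err := fs.Sub(uiAssets, "ui/dist")
	if err != nil {
		log.Fatal(err)
	}
	http.Handle("/", http.FileServer(http.FS(ui)))
	log.Fatal(http.ListenAndServe("localhost:4040", nil))
}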

Built-in OpenTelemetry receiver if you want trace correlation. Self-update command that pulls from GitHub releases with SHA-256 verification. AES-256 encrypted local storage for API keys.

Everything is local. Everything is fast. sentinel check --offline and you’re running in under a second.

Try It

It’s MIT licensed and the binaries are on GitHub:

https://sentinel.tinycrafts.ai

# Download, add to PATH, done
sentinel check --offline
sentinel run --offline -- your_app_command
sentinel ui

If you’re running multiple services locally and you’re tired of losing errors in the scroll, give it 5 minutes. That’s all it takes.


I built Sentinel to scratch my own itch. If it scratches yours too, star the repo or tell me what’s missing. I’m shipping updates.
