rate my nix config  ·  lint + RAG + local LLM  ·  100% open source  ·  zero cloud

paste your config — flake.nix · configuration.nix · home.nix

How It Works
1. LINT: statix + deadnix run deterministically over your config → finds redundant defaults, unused bindings, deprecated attrs, useless parens.
2. RETRIEVE: findings + attr paths → embedding → cosine search over the nixpkgs corpus → pulls relevant package metadata & NixOS option docs from the local index.
3. REVIEW: lint findings + retrieved context → local LLM (hermes3:3b via Ollama) → writes natural-language review comments with line numbers and fix suggestions.
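The three stages above compose into one pipeline. A minimal sketch of that shape; the `Finding` record, function names, and injected stages are illustrative assumptions, not the project's actual code:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    file: str
    line: int
    message: str  # e.g. "unused binding `pkgs`" from deadnix

def review(config_path: str,
           lint: Callable[[str], list[Finding]],
           retrieve: Callable[[Finding], list[str]],
           generate: Callable[[Finding, list[str]], str]) -> list[str]:
    """Stage 1 lints deterministically; stage 2 pulls corpus context
    per finding; stage 3 turns each finding + context into prose."""
    comments = []
    for finding in lint(config_path):
        context = retrieve(finding)        # cosine search over the local index
        comments.append(generate(finding, context))
    return comments
```

Because the LLM only ever sees structured findings plus retrieved context, swapping the model out (stage 3) never touches the deterministic stages.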
98k nixpkgs packages  ·  16k NixOS options  ·  114k total corpus rows  ·  0 cloud API calls
The Corpus

Scraped directly from a local nixpkgs-unstable clone. 98,382 packages enumerated via nix-env -qaP --json and 16,095 NixOS module options via nix-instantiate --eval. Pinned to commit b12141ef. Exported as a public Hugging Face dataset at OpenxAILabs/nix-corpus.
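Flattening the package half of that scrape into corpus rows is a small JSON transform. A sketch: the input shape (attr-path keys, pname/version, meta.description) matches what nix-env -qaP --json emits, but the output row schema here is an assumption:

```python
import json

def package_rows(nix_env_json: str) -> list[dict]:
    """Flatten `nix-env -qaP --json` output into one corpus row per package."""
    data = json.loads(nix_env_json)
    rows = []
    for attr_path, info in data.items():
        rows.append({
            "attr": attr_path,                 # e.g. "nixpkgs.hello"
            "pname": info.get("pname", ""),
            "version": info.get("version", ""),
            "description": info.get("meta", {}).get("description", ""),
        })
    return rows

# Illustrative single-package excerpt of the real output shape
sample = ('{"nixpkgs.hello": {"pname": "hello", "version": "2.12.1", '
          '"meta": {"description": "A program that produces a familiar, '
          'friendly greeting"}}}')
```

The description field is what later gets embedded; attr path and version ride along as metadata for the prompt.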

The Model

No fine-tuning. The deterministic linters (statix, deadnix) do the heavy lifting: they find real bugs before the LLM sees anything. The LLM's only job is converting structured findings into readable prose. That job goes to hermes3:3b running locally via Ollama: zero API cost, fully sovereign.
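The hand-off to the model is a single local HTTP call against Ollama's /api/generate endpoint. A minimal sketch; the prompt wording is invented for illustration:

```python
import json
import urllib.request

def review_prompt(finding: str, context: list[str]) -> str:
    """Convert one structured finding + retrieved context into a prompt."""
    ctx = "\n".join(f"- {c}" for c in context)
    return (
        "You are reviewing a NixOS config. Relevant nixpkgs context:\n"
        f"{ctx}\n\nFinding: {finding}\n"
        "Write one clear review comment with a fix suggestion."
    )

def ask_ollama(prompt: str, host: str = "http://localhost:11434") -> str:
    """POST to the local Ollama server; no cloud API involved."""
    payload = json.dumps({"model": "hermes3:3b", "prompt": prompt,
                          "stream": False}).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With stream set to False, Ollama returns one JSON object whose response field is the full completion, which keeps the caller trivially simple.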

The RAG Index

Every package description and option doc is embedded with nomic-embed-text (768 dims) and stored as numpy arrays. At review time, findings are embedded and cosine-searched to pull relevant nixpkgs context into the prompt — grounding the LLM in real package metadata rather than hallucinated facts.
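Cosine search over those stored arrays reduces to a normalized dot product. A sketch with numpy; the only assumptions are the array shapes (N rows of d-dim embeddings, d = 768 in this index):

```python
import numpy as np

def cosine_topk(query: np.ndarray, corpus: np.ndarray, k: int = 5) -> np.ndarray:
    """corpus: (N, d) embedding rows; query: (d,). Returns top-k row indices."""
    corpus_n = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    scores = corpus_n @ query_n        # cosine similarity per corpus row
    return np.argsort(-scores)[:k]     # highest similarity first
```

At 114k × 768 float32 the whole index is ~350 MB and a brute-force matmul per query is fast enough that no approximate-nearest-neighbour structure is needed.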

Why Not Fine-Tuning?

Fine-tuning a 3B model needs ~50k labelled examples and a GPU. This tool needs neither. statix already knows the Nix grammar rules. The RAG index already knows the nixpkgs API surface. The LLM just needs to write one clear sentence per finding, a task even a 1B model handles well with good few-shot prompting.
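Few-shot prompting here just means prepending a couple of worked finding-to-comment pairs so the small model imitates the format. A minimal sketch; the example findings and comments are invented for illustration:

```python
# Hypothetical worked examples; real ones would come from curated reviews.
FEW_SHOT = [
    ("statix: useless parentheses at line 4",
     "Line 4: drop the redundant parentheses around the attribute set."),
    ("deadnix: unused binding `lib` at line 1",
     "Line 1: `lib` is bound but never used; remove it from the argument set."),
]

def few_shot_prompt(finding: str) -> str:
    """Prepend worked examples, then leave the final Comment: for the model."""
    shots = "\n\n".join(f"Finding: {f}\nComment: {c}" for f, c in FEW_SHOT)
    return f"{shots}\n\nFinding: {finding}\nComment:"
```

Ending the prompt at "Comment:" constrains the model to emit exactly the one-sentence shape the examples establish.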

Infrastructure

Runs on a sovereign Openmesh Xnode (16 GB RAM, x86 CPU). Deployed as a NixOS container via flake.nix. Shares Ollama with sibling apps — one model server, multiple consumers. No GPU required. Inference for a typical config takes 5–15 seconds on CPU.

Roadmap

v1: statix + deadnix lint, RAG retrieval, hermes3:3b prose.
v2: source file reader (Pass B), build-eval harness, Qwen 2.5 Coder 3B upgrade.
v3: home-manager support (Pass G), streaming output, VS Code extension.