Every Cortex Home Node is a full-stack compute engine running on your own hardware. No monthly fees for core features.
🧠
Local AI Inference (Ollama)
Run LLMs on your GPU — Llama, Mistral, CodeLlama, and more. Free, private, no API costs. OB1 routes here first before any cloud fallback.
🤖
Claude Proxy
When local AI can't handle a request, OB1 falls back to Claude via an encrypted proxy. A source badge in chat shows which AI answered.
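The local-first routing described above can be sketched as a simple fallback policy. This is a minimal illustration, not the actual OB1 API: the provider functions and the `"ollama"`/`"claude"` labels are stand-ins mirroring the text.

```typescript
// Minimal sketch of local-first AI routing with cloud fallback.
// Providers are injected so the routing policy itself stays testable.

type Provider = (prompt: string) => Promise<string>;

interface RoutedAnswer {
  source: "ollama" | "claude"; // surfaced as the "source badge" in chat
  text: string;
}

async function route(
  prompt: string,
  local: Provider,
  cloud: Provider,
): Promise<RoutedAnswer> {
  try {
    // Try the free, private local model first.
    return { source: "ollama", text: await local(prompt) };
  } catch {
    // Only fall back to the encrypted Claude proxy if local inference fails.
    return { source: "claude", text: await cloud(prompt) };
  }
}
```

Injecting the providers keeps the fallback decision independent of any particular model runtime.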
🎬
Video Rendering (Remotion)
Server-side video generation — social clips, audio visualizers, title cards. Rendered locally on your GPU with real-time progress tracking.
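As a rough sketch of what a render job looks like, the node could shell out to the Remotion CLI. The entry point, composition id, and output path below are hypothetical examples; a production setup would more likely call Remotion's renderer programmatically.

```typescript
// Sketch: assembling a server-side Remotion render invocation.
// All paths and the composition id are illustrative, not real project values.

interface RenderJob {
  entry: string;         // bundle entry point, e.g. src/index.ts
  compositionId: string; // id of the <Composition> to render
  outFile: string;       // where the finished video lands
}

function buildRenderCommand(job: RenderJob): string[] {
  // Mirrors the CLI form `npx remotion render <entry> <composition-id> <out>`.
  return ["npx", "remotion", "render", job.entry, job.compositionId, job.outFile];
}
```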
🔄
Brain Sync
Your personal knowledge base syncs between cloud and local. Memories, preferences, learned patterns — all stored in ~/.vp3/brain/.
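A sync like this needs a merge rule when cloud and local disagree. Here is a last-write-wins sketch; the memory shape and timestamps are illustrative, and the real on-disk format under ~/.vp3/brain/ is not specified here.

```typescript
// Sketch of a last-write-wins merge between the cloud and local brain copies.

interface Memory {
  key: string;
  value: string;
  updatedAt: number; // epoch ms of the last write
}

function mergeBrains(cloud: Memory[], local: Memory[]): Memory[] {
  const merged = new Map<string, Memory>();
  for (const m of [...cloud, ...local]) {
    const existing = merged.get(m.key);
    // Keep whichever side was written most recently.
    if (!existing || m.updatedAt > existing.updatedAt) merged.set(m.key, m);
  }
  return [...merged.values()];
}
```

Last-write-wins is the simplest policy; a real sync might instead keep both versions or merge field by field.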
🎤
Whisper Speech-to-Text
Local speech transcription. Voice commands and audio transcribed on your hardware — no audio sent to external servers.
🎨
Stable Diffusion
AI image generation from text prompts. Connect to AUTOMATIC1111 or ComfyUI running on your GPU for local image creation.
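Talking to a local AUTOMATIC1111 instance looks roughly like this. The endpoint and field names follow AUTOMATIC1111's documented `/sdapi/v1/txt2img` API; the host, port, and parameter values are example assumptions.

```typescript
// Sketch: text-to-image via a local AUTOMATIC1111 server.
// Parameter defaults (steps, 512x512) are illustrative choices.

interface Txt2ImgRequest {
  prompt: string;
  negative_prompt?: string;
  steps: number;
  width: number;
  height: number;
}

function buildTxt2Img(prompt: string): Txt2ImgRequest {
  return { prompt, steps: 20, width: 512, height: 512 };
}

async function generate(prompt: string): Promise<string> {
  const res = await fetch("http://127.0.0.1:7860/sdapi/v1/txt2img", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildTxt2Img(prompt)),
  });
  const data = (await res.json()) as { images: string[] };
  return data.images[0]; // base64-encoded image
}
```

Nothing leaves the machine: the request goes to the GPU box itself.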
👤
Avatar Enhancer (LAM)
Tier 3 avatar generation — takes your rig spec and generates AI-enhanced nano sheets per body zone via local Stable Diffusion.
👀
Face Recognition
Local face detection via TensorFlow.js. Tag photos, verify identity, and match avatar references — all on your machine.
🌐
Tunnel Manager
Expose your home server to the internet via ngrok or Cloudflare. Quick tunnels (free, no account) or named tunnels for stable URLs.
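Once a quick tunnel is up, its public URL can be read back from the ngrok agent's local inspection API at `http://127.0.0.1:4040/api/tunnels`. The response shape below matches that API; assuming the agent runs on its default port is the one assumption here.

```typescript
// Sketch: discovering the public URL of a running ngrok quick tunnel.

interface NgrokTunnel { public_url: string; proto: string; }
interface NgrokTunnelList { tunnels: NgrokTunnel[]; }

function publicHttpsUrl(list: NgrokTunnelList): string | undefined {
  // Prefer the https endpoint when the agent exposes both protocols.
  return list.tunnels.find((t) => t.proto === "https")?.public_url;
}

async function currentTunnelUrl(): Promise<string | undefined> {
  const res = await fetch("http://127.0.0.1:4040/api/tunnels");
  return publicHttpsUrl((await res.json()) as NgrokTunnelList);
}
```

Quick tunnels get a fresh random URL on every restart, which is why named tunnels exist for stable addresses.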
🔌
Plugin Pipeline + Store
Extensible architecture with sandboxed plugins. Browse the built-in plugin store or import your own custom automation scripts.
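A plugin pipeline in its simplest form passes a payload through each plugin in turn. The `Plugin` interface here is illustrative, not the store's actual contract, and real sandboxing would isolate each `run` call.

```typescript
// Minimal sketch of a plugin pipeline: each plugin transforms the payload
// and hands its output to the next plugin in registration order.

interface Plugin {
  name: string;
  run(input: string): string;
}

function runPipeline(plugins: Plugin[], input: string): string {
  return plugins.reduce((acc, p) => p.run(acc), input);
}
```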
⚙️
Workflow Engine
Multi-step automation workflows. Chain AI calls, file operations, and API requests into repeatable sequences.
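The chaining idea can be sketched as named async steps run in sequence, each fed the previous step's output. Step names and shapes are illustrative, not the engine's real schema.

```typescript
// Sketch of a multi-step workflow runner. Each step might wrap an AI call,
// a file operation, or an API request; here they are plain async functions.

interface Step {
  name: string;
  run(input: unknown): Promise<unknown>;
}

async function runWorkflow(steps: Step[], seed: unknown) {
  let value = seed;
  const log: string[] = [];
  for (const step of steps) {
    value = await step.run(value); // output becomes the next step's input
    log.push(step.name);           // record progress for repeatable sequences
  }
  return { value, log };
}
```

Running steps strictly in order keeps failures easy to localize: the log shows exactly which step last completed.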
📧
Email Module
IMAP/SMTP inbox — read, search, and AI-summarize email locally. Your mail stays on your machine, processed by your AI.
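The summarize step could look like this: messages fetched over IMAP are folded into one prompt for the local model. The message shape is illustrative; actual fetching would go through an IMAP client, and the prompt would be sent to the local inference endpoint.

```typescript
// Sketch: turning fetched mail into a single local-AI summarization prompt.

interface EmailMessage {
  from: string;
  subject: string;
  body: string;
}

function buildSummaryPrompt(messages: EmailMessage[]): string {
  const listing = messages
    .map((m, i) => `${i + 1}. From ${m.from}: ${m.subject}\n${m.body}`)
    .join("\n\n");
  return `Summarize today's inbox in a few bullet points:\n\n${listing}`;
}
```

Because the prompt is built and answered locally, the mail content never reaches a third-party API.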