VP3 Home Server runs on your own hardware. Local AI inference with Ollama, brain backups, video rendering, workflow automation, and a full admin dashboard — nothing phones home.
Every module runs locally on your machine. Toggle them on or off from the Cortex Dashboard. No cloud dependencies required.
Run LLMs locally with Ollama. Pull models, switch between them, and process AI tasks without sending data to any cloud. Supports Llama, Mistral, CodeLlama, and any other model from the Ollama library.
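Because the module runs on Ollama, any script on your network can talk to Ollama's standard REST API directly. A minimal sketch, assuming Ollama's default port 11434 and a model you have already pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the generated text."""
    body = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama with the model pulled):
# print(generate("mistral", "Summarize local-first AI in one sentence."))
```

No API key, no cloud round-trip: the prompt never leaves your machine.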
Sync your VP3 brain data to your home server. Conversation history, soul files, knowledge base, and conversation analysis — all backed up locally and encrypted.
Server-side video rendering with Remotion. Create social clips, title cards, audio visualizations, and promotional content directly from VP3.
Local speech-to-text with OpenAI Whisper. Transcribe recordings, voice notes, and live conversations without sending audio to external services.
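Whisper's `transcribe()` output includes timed segments, which makes it easy to post-process transcripts yourself. A sketch, assuming the segment structure of the open-source `openai-whisper` package (`start`/`end` in seconds, plus `text`), that turns segments into SRT subtitles:

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Convert Whisper-style segments ({'start', 'end', 'text'}) into an SRT document."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}"
        )
    return "\n\n".join(blocks) + "\n"

# Usage with openai-whisper (downloads the model on first run, all local):
# import whisper
# result = whisper.load_model("base").transcribe("voice-note.mp3")
# print(segments_to_srt(result["segments"]))
```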
Generate images locally. Product photos, avatars, thumbnails, and creative assets — all processed on your GPU without usage fees.
Local SQLite database for server-side storage. Brain data, render history, workflow logs, and plugin state — all queryable and portable.
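Because the storage is plain SQLite, anything on the machine can open and query it with a standard client. A sketch using Python's built-in `sqlite3` module — the `render_history` schema here is hypothetical, not VP3's actual table layout:

```python
import sqlite3

# Hypothetical render-history table; VP3's real schema may differ.
conn = sqlite3.connect(":memory:")  # use a file path, e.g. "vp3.db", for persistence
conn.execute("""
    CREATE TABLE render_history (
        id       INTEGER PRIMARY KEY,
        template TEXT NOT NULL,
        status   TEXT NOT NULL,
        created  TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO render_history (template, status) VALUES (?, ?)",
    ("title-card", "done"),
)
conn.commit()

# Queryable with ordinary SQL from any SQLite client:
rows = conn.execute(
    "SELECT template, status FROM render_history WHERE status = ?", ("done",)
).fetchall()
print(rows)  # → [('title-card', 'done')]
```

Portability comes for free: the database is a single file you can copy, back up, or open elsewhere.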
Visual workflow automation engine. Chain AI tasks, file operations, API calls, and notifications into repeatable pipelines triggered by events or schedules.
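The core idea — steps that each take a context and pass an updated one along — can be sketched in a few lines. This is an illustrative model only; VP3's actual workflow engine, step types, and trigger API are not shown here:

```python
from typing import Callable

# A step is any callable that takes the pipeline context and returns it updated.
Step = Callable[[dict], dict]

def run_pipeline(steps: list[Step], event: dict) -> dict:
    """Run each step in order, threading the context from one to the next."""
    ctx = dict(event)  # copy so the triggering event isn't mutated
    for step in steps:
        ctx = step(ctx)
    return ctx

def transcribe(ctx):   # placeholder for an AI task step
    ctx["text"] = f"transcript of {ctx['file']}"
    return ctx

def notify(ctx):       # placeholder for a notification step
    ctx["notified"] = True
    return ctx

result = run_pipeline([transcribe, notify], {"file": "standup.mp3"})
print(result["text"])  # → transcript of standup.mp3
```

An event trigger or scheduler would simply call `run_pipeline` with the triggering event as the initial context.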
Extend your home server with plugins. Install from the VP3 plugin store or build your own. Each plugin gets isolated storage and API access.
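"Isolated storage" typically means each plugin reads and writes only its own namespace. A minimal sketch of that idea — the `PluginStorage` class and file layout are hypothetical, not VP3's real plugin API:

```python
import json
import tempfile
from pathlib import Path

class PluginStorage:
    """Per-plugin key-value store backed by the plugin's own JSON file.

    Illustrative only: each plugin gets its own subdirectory, so one
    plugin cannot read or clobber another's state.
    """
    def __init__(self, root: Path, plugin: str):
        self.path = root / plugin / "state.json"
        self.path.parent.mkdir(parents=True, exist_ok=True)

    def get_all(self) -> dict:
        return json.loads(self.path.read_text()) if self.path.exists() else {}

    def set(self, key: str, value) -> None:
        data = self.get_all()
        data[key] = value
        self.path.write_text(json.dumps(data))

# Demo against a throwaway directory:
with tempfile.TemporaryDirectory() as tmp:
    store = PluginStorage(Path(tmp), "weather-plugin")
    store.set("last_run", "2025-01-01")
    state = store.get_all()
print(state)  # → {'last_run': '2025-01-01'}
```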
SMTP email integration for notifications, alerts, and scheduled reports. Get notified when renders complete, backups run, or AI tasks finish.
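A notification like "render complete" is just an ordinary SMTP message. A sketch using Python's standard library — the sender, recipient, and SMTP host here are placeholders for your own configuration:

```python
import smtplib
from email.message import EmailMessage

def render_complete_email(sender: str, recipient: str, job: str) -> EmailMessage:
    """Build a plain-text notification email for a finished render job."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = f"VP3: render '{job}' complete"
    msg.set_content(f"The render job '{job}' finished successfully.")
    return msg

msg = render_complete_email("vp3@home.local", "you@example.com", "title-card")

# Sending it (host, port, and credentials depend on your SMTP provider):
# with smtplib.SMTP("smtp.example.com", 587) as smtp:
#     smtp.starttls()
#     smtp.login("user", "password")
#     smtp.send_message(msg)
```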
Connect your GitHub repos for automated deployments, backup syncing, and code analysis. Push brain data or workflow configs to version control.
Automated backup scheduling. Full server state, brain data, render outputs, and configuration — on your schedule, to your storage.
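The scheduling itself lives in the server, but the archive step is straightforward to reason about: snapshot a directory into a timestamped tarball. A sketch under those assumptions — the `vp3-backup-` naming is made up for illustration:

```python
import tarfile
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def backup_dir(src: Path, dest: Path) -> Path:
    """Archive src into dest as a timestamped .tar.gz and return the archive path."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    archive = dest / f"vp3-backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)  # keep paths relative inside the archive
    return archive

# Demo against a throwaway directory:
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "brain"
    src.mkdir()
    (src / "soul.json").write_text("{}")
    archive = backup_dir(src, Path(tmp))
    print(archive.name)  # e.g. vp3-backup-20250101-120000.tar.gz
```

Pointing `dest` at a mounted NAS or external drive gives you "to your storage" with no cloud in the loop.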
Join the VP3 compute mesh. Contribute GPU cycles for rendering and AI inference, earn VP3 energy, and distribute workloads across your devices.
Download, install dependencies, and start the server. The Cortex Dashboard is then available at http://localhost:3077.
Requires Node.js 18+ • Runs on port 3077 • Auto-detects Ollama
Full admin dashboard for your home server. Monitor modules, manage AI models, browse brain backups, queue renders, configure workflows, and view real-time logs.