Agentic Driven Compute Solutions
Home Server

Edge GPU for local AI on a tiny board.

Jetson Orin Nano (8 GB) is the cheapest serious GPU you can put on a shelf. VP3 supports Jetson as a Compute Agent: it runs Ollama, Stable Diffusion, and Whisper at real speeds, sips power, and fits anywhere.

What it does

NVIDIA Jetson at a glance.

CUDA + TensorRT
Jetson's GPU runs the same CUDA stack as a desktop card. Ollama, Stable Diffusion, and Whisper recognize it as a compute target and route jobs accordingly.
8 GB unified memory
Enough to run 7B-parameter LLMs and Stable Diffusion 1.5 comfortably. Larger models route to your desktop via Compute Agents.
Power-frugal
10-15 W under load. Fanless cases work. Always-on AI inference without the electric bill of a desktop GPU.
Auto-detected
Plug the Jetson into your network and run the pairing CLI; it shows up in Compute Agents with its capability set already populated.
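The "8 GB unified memory" claim comes down to simple arithmetic: a quantized model's weights plus runtime overhead must fit in the shared CPU/GPU pool. Here is a rough back-of-the-envelope sketch; the 2 GB overhead allowance for KV cache, activations, and the OS is an assumption, not a measured figure.

```python
def model_footprint_gb(params_billion: float, bits_per_weight: int,
                       overhead_gb: float = 2.0) -> float:
    """Approximate resident size of a quantized LLM in GB.

    Weights: params * bits / 8 bytes (1B params at 8-bit = 1 GB).
    overhead_gb is an assumed allowance for KV cache, activations,
    and the OS sharing the same unified memory pool.
    """
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb

JETSON_ORIN_NANO_GB = 8

for name, params, bits in [("7B @ 4-bit", 7, 4), ("13B @ 4-bit", 13, 4)]:
    need = model_footprint_gb(params, bits)
    verdict = "fits on Jetson" if need <= JETSON_ORIN_NANO_GB else "route to desktop"
    print(f"{name}: ~{need:.1f} GB -> {verdict}")
```

By this estimate a 4-bit 7B model (~5.5 GB) fits with headroom, while a 4-bit 13B model (~8.5 GB) is why larger models route to your desktop via Compute Agents.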

Add NVIDIA Jetson to your agent.

Every VP3 plan unlocks the full c:// drive catalog. Pick a plan, install the home server, flip the app on.

Choose Your Plan