The World's Idle Mac GPUs
Working For You
A decentralized marketplace for Apple M-series unified memory compute. Run Llama 3 70B (4-bit quantized) on a Mac Mini M4 Pro for $0.07/hr. Earn HC credits while your Mac sleeps.
Mac Mini M4 Pro (48 GB) = two A100 80GB cards ($30,000+) for AI inference. No CUDA required. No cloud markup. Just Metal.
Apple Silicon — The Compute Advantage
Unified memory = GPU memory. No separate VRAM. A $1,400 Mac runs what needs $30,000 in NVIDIA cards.
| Mac Model | Unified Memory | GPU Cores | Best For | Est. Rate |
|---|---|---|---|---|
| Mac Mini M4 | 16–24 GB | 10-core | Small inference, simulation | $0.03/hr |
| Mac Mini M4 Pro (popular) | 24–64 GB | 20-core | 30B LLM inference, Stable Diffusion | $0.07/hr |
| Mac Studio M4 Max | 64–128 GB | 40-core | 70B LLM inference | $0.12/hr |
| Mac Studio M4 Ultra | 128–192 GB | 80-core | Full 405B model shards | $0.22/hr |
Compared to AWS p4d.24xlarge (8× A100): $32.77/hr · No Apple Silicon support · 3× higher latency for LLM inference
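To make the rate gap concrete, here is a back-of-the-envelope comparison using the rates quoted above (illustrative arithmetic only, not a benchmark):

```python
# Illustrative cost comparison: 24 hours of 70B LLM inference.
# Rates are taken from the table and the AWS figure quoted above.
HOURS = 24
m4_max_rate = 0.12      # Mac Studio M4 Max, $/hr
aws_p4d_rate = 32.77    # AWS p4d.24xlarge (8x A100), $/hr

m4_cost = HOURS * m4_max_rate
aws_cost = HOURS * aws_p4d_rate

print(f"Mac Studio M4 Max: ${m4_cost:.2f}")          # $2.88
print(f"AWS p4d.24xlarge:  ${aws_cost:.2f}")         # $786.48
print(f"Savings factor: {aws_cost / m4_cost:.0f}x")  # 273x
```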
Built for Real GPU Compute
Apple Silicon Advantage
Unified memory means a $1,400 Mac Mini M4 Pro runs Llama 3 70B (4-bit quantized) — hardware that normally requires two $15,000 A100 cards.
Sandboxed Execution
Every job runs in a dedicated macOS sandbox-exec profile: restricted filesystem, localhost-only network, isolated OS user.
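macOS sandbox profiles are written in Apple's SBPL, a small Scheme-like policy language. A minimal sketch of the kind of deny-by-default profile described above (illustrative only; paths and allowed binaries are hypothetical, not the agent's shipped profile):

```
(version 1)
(deny default)                          ; deny everything not explicitly allowed
(allow file-read* file-write*
    (subpath "/private/tmp/nm-job"))    ; job-scoped scratch directory (hypothetical path)
(allow network-outbound
    (remote ip "localhost:*"))          ; localhost-only network access
(allow process-exec
    (literal "/usr/bin/python3"))       ; only the job runtime may be executed
```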
Zero-Copy GPU Access
MLX and PyTorch MPS use Metal directly. No CPU↔GPU copies, no driver headaches — full unified memory bandwidth from day one.
Decentralized Network
libp2p Kademlia DHT — no single point of failure. Providers connect peer-to-peer; the coordinator cluster is just a matchmaker.
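Kademlia routes lookups by XOR distance between node IDs, so any peer can find providers without a central registry. A minimal sketch of the metric (toy 8-bit IDs for illustration; libp2p uses 256-bit keys, and this is not NeuralMesh's actual routing code):

```python
# Kademlia orders peers by XOR distance between node IDs.
def xor_distance(a: int, b: int) -> int:
    return a ^ b

peers = [0b00101100, 0b11010001, 0b00110000]
target = 0b00110110  # key being looked up

# Each lookup step contacts the peers closest to the target,
# halving the remaining distance, so lookups take O(log n) hops.
closest = min(peers, key=lambda p: xor_distance(p, target))
print(f"{closest:08b}")  # 00110000 (distance 6, the nearest peer)
```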
Earn While You Sleep
Automatic idle detection via IOKit + CGSession. Your Mac starts accepting jobs only when screen is locked and GPU is idle.
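On macOS, system-wide input idle time is exposed by IOKit as `HIDIdleTime` in nanoseconds (visible via `ioreg -c IOHIDSystem`). A hedged sketch of the threshold check, parsing sample `ioreg` output (the agent's real implementation queries IOKit and CGSession directly; the function name here is illustrative):

```python
import re

IDLE_THRESHOLD_S = 600  # 10 minutes, matching --idle-minutes 10

def idle_seconds(ioreg_output: str) -> float:
    """Parse HIDIdleTime (nanoseconds) from `ioreg -c IOHIDSystem` output."""
    match = re.search(r'"HIDIdleTime"\s*=\s*(\d+)', ioreg_output)
    if match is None:
        raise ValueError("HIDIdleTime not found")
    return int(match.group(1)) / 1e9  # nanoseconds -> seconds

# Sample line in the format ioreg prints:
sample = '| "HIDIdleTime" = 745000000000'
print(idle_seconds(sample))                      # 745.0
print(idle_seconds(sample) >= IDLE_THRESHOLD_S)  # True
```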
HC Credit System
Off-chain HC credits for Phase 1. On-chain Arbitrum L2 token for trustless settlement in Phase 3 — no gas fees in the interim.
Run AI jobs in 3 steps
Submit Python scripts to the network. MLX, PyTorch MPS, ONNX CoreML — choose your runtime.
```
brew install hatch/tap/nm
nm gpu list --min-ram 48 --runtime mlx
nm job submit --runtime mlx --ram 48 ./inference.py
```

Earn while your Mac sleeps
Install the agent on your Mac Mini or Mac Studio. Idle detection is automatic — jobs only run when your screen is locked.
```
curl -fsSL https://raw.githubusercontent.com/wkang0223/neuralmesh/master/scripts/install-agent-macos.sh | bash
nm provider config --idle-minutes 10 --floor-price 0.05
nm provider start
```

Use from Python or CLI
One command to install — works with any Python project or Jupyter notebook.
```
pip install hatch
```

```python
import hatch as nm

nm.configure(account_id="your-id")

job = nm.submit(
    script="./inference.py",
    runtime="mlx",
    ram_gb=48,
    hours=2,
)

for line in job.stream_logs():
    print(line, end="")
```

```
# Install
brew install hatch/tap/nm

# Browse available Macs
nm gpu list --min-ram 48 --runtime mlx

# Submit a job
nm job submit \
  --runtime mlx \
  --ram 48 \
  --hours 2 \
  ./llama_inference.py

# Stream logs
nm job logs <job-id> --follow
```
How Credits & Payments Work
Transparent, real-time accounting. Every HC is tracked — no hidden fees, no surprises.
1. Pay via Stripe or wire HC from Solana / Arbitrum. Credits appear instantly.
2. When a job is matched, your maximum budget moves to escrow: held safely, not spent.
3. The provider's GPU executes your job, metered to the second.
4. 92% goes to the provider, 8% is the platform fee, and unused budget is refunded to you.
5. Providers cash out to their Solana or Arbitrum wallet. No lock-up.
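The settlement arithmetic above can be sketched as follows (illustrative; the 92/8 split and per-second metering come from this page, while the function name and figures are hypothetical):

```python
def settle(escrow_hc: float, rate_hc_per_hr: float, seconds_used: int):
    """Per-second metering with a 92/8 provider/platform split;
    unused escrow is refunded to the requester."""
    charge = rate_hc_per_hr * seconds_used / 3600
    charge = min(charge, escrow_hc)   # never exceed the escrowed budget
    provider = charge * 0.92          # 92% to the provider
    platform = charge * 0.08          # 8% platform fee
    refund = escrow_hc - charge       # unused budget back to the requester
    return provider, platform, refund

# 2-hour max budget at 0.07 HC/hr, job finishes in 90 minutes:
provider, platform, refund = settle(escrow_hc=0.14, rate_hc_per_hr=0.07,
                                    seconds_used=90 * 60)
print(round(provider, 4), round(platform, 4), round(refund, 4))
# 0.0966 0.0084 0.035
```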