Request compute capacity, not servers.
Run apps, APIs, workers, batch and AI workloads on a global compute fabric with wallet-native billing, protected uptime, and hybrid capacity pools.
Billing unit
1 NLC = 1 USD
Stable internal compute credit with nanoNLC accounting.
Trust rule
No idle variable spend
No workload, no reserve, no variable spend.
From funding to execution in five steps
Wallet-native identity and capacity requests keep the workflow direct without exposing raw nodes.
Add funds
Card funding converts into NLC, and NLR-funded top-ups can receive a token incentive without changing compute billing.
Request capacity
The quote engine prices CPU, RAM, VRAM, bandwidth and availability.
Launch a workload
Launch a single workload object, whether it runs as a long-lived service or a finite execution.
Scale across the network
Placement stays pool-based. Customers request capacity, not machines.
Pay for use or reserve
Protected reserve is explicit. Otherwise variable spend stays at zero when idle.
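The five steps above can be sketched end to end. This is an illustrative model only: the class names and per-resource rates are assumptions, not NoirLedger's published SDK or pricing; the 1:1 card conversion follows from the stated 1 NLC = 1 USD peg.

```python
# Illustrative model of the fund -> quote -> launch -> pay flow.
# Class names and rates are assumptions, not the published product surface.

class Wallet:
    def __init__(self) -> None:
        self.nlc = 0.0  # 1 NLC = 1 USD internal compute credit

    def fund_with_card(self, usd: float) -> None:
        self.nlc += usd  # step 1: card funding converts into NLC

class QuoteEngine:
    # NLC per hour per unit; purely illustrative numbers
    RATES = {"cpu_core": 0.02, "ram_gb": 0.005, "vram_gb": 0.01}

    def quote(self, cpu_cores: int, ram_gb: int, vram_gb: int) -> float:
        r = self.RATES
        return (cpu_cores * r["cpu_core"]
                + ram_gb * r["ram_gb"]
                + vram_gb * r["vram_gb"])

wallet = Wallet()
wallet.fund_with_card(100)               # 1. add funds
hourly = QuoteEngine().quote(2, 4, 0)    # 2. request capacity (a quote, not a machine)
hours_run = 10                           # 3./4. workload runs on pool-placed capacity
wallet.nlc -= hourly * hours_run         # 5. pay for use; idle variable spend stays zero
print(f"{hourly:.3f} NLC/hour, balance {wallet.nlc:.2f} NLC")
```

The point of the sketch is the shape of the flow: balance only moves when a funded top-up or a running workload moves it.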
Four clear surfaces around one compute fabric
Long-running services, finite execution, direct SDK control and the worker supply layer all stay under one workload model.
Service workloads
Run APIs, bots, services and backend systems as long-running workloads with optional protected uptime.
Finite workloads
Run jobs, pipelines and burst inference without keeping capacity online when nothing is running.
SDK and API
Request capacity, deploy workloads, stream logs and manage usage with simple primitives.
Worker supply
Turn controlled local hardware into network capacity and choose how much of your machine you share.
Built like modern infrastructure, not a VPS marketplace
The product sells compute capacity, protected uptime and distributed execution under a calm, premium control plane.
Compute pools, not single nodes
Hybrid supply under one control plane
Wallet-native credits and network asset
Protected uptime options
Developer-first primitives
Simple quoting and live burn tracking
Built as a full platform, not just a landing page
Identity, ledger correctness, control-plane policy, worker trust and provider abstraction all shape the product from day one.
Identity and access
Account auth, wallet linking and session controls are part of the product foundation, not an afterthought.
Billing and ledger
NLC remains the stable compute unit with ledger-grade accounting and reserve separated from variable usage.
Control plane and execution
Quotes, placement, workload records and protected policy are designed as one operational surface.
Worker trust and rails
Worker growth depends on trust controls, payout abstraction and provider capability checks.
Quote before deploy
The pricing engine stays simple on the surface even when reserve and availability logic are involved.
API runtime
2 cores, 4 GB RAM, 0 GB VRAM
Protected inference endpoint
4 cores, 8 GB RAM, 8 GB VRAM
Batch AI inference
6 cores, 16 GB RAM, 12 GB VRAM
Quote preview
0.1756 NLC/hour
126.48 NLC/month estimate
No workload, no reserve, no variable spend.
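As a rough sanity check on previews like the one above, a monthly figure is just the hourly rate projected over an assumed month length. The 720-hour month here is a common convention and an assumption; the product's own estimate may use a different month length or rounding.

```python
def monthly_estimate(nlc_per_hour: float, hours_per_month: int = 720) -> float:
    """Project an hourly NLC rate over an assumed 720-hour month."""
    return round(nlc_per_hour * hours_per_month, 2)

print(monthly_estimate(0.1756))  # 126.43 under a 720-hour month
```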
Low-friction APIs and SDKs
Start with quotes and a unified workload object, then expand into logs, metrics and billing data.
TypeScript: request a quote.

import { NoirLedger } from "@noirledger/sdk";

const client = new NoirLedger({ apiKey: process.env.NOIRLEDGER_API_KEY! });

const quote = await client.createQuote({
  lane: "runtime",
  availability_class: "protected",
  region_mode: "specific",
  region: "us-east-1",
  cpu_cores: 4,
  cpu_ghz: 3.2,
  ram_gb: 8,
  vram_gb: 8,
  gpu_tier: "G2"
});

Python: launch a workload from a quote.

from noirledger import NoirLedger

client = NoirLedger(api_key="YOUR_API_KEY")

workload = client.create_workload(
    project_id="proj_123",
    quote_id="qte_123",
    image="ghcr.io/acme/worker:latest",
    command=["python", "run.py"],
)

cURL: create a workload over the HTTP API.

curl -X POST http://localhost:8080/v1/workloads \
  -H "Authorization: Bearer $NOIRLEDGER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project_id": "proj_core_api",
    "quote_id": "qte_123",
    "image": "ghcr.io/acme/api:latest",
    "endpoint": { "type": "http", "port": 8080 },
    "replicas_min": 2
  }'

CLI: the same deploy with the noir tool.

noir deploy workload \
  --project proj_core_api \
  --quote qte_123 \
  --image ghcr.io/acme/api:latest \
  --endpoint http:8080
Controlled local hardware becomes network capacity
Workers declare limits, hardware profile, accepted workload classes and payout route without exposing the node-selection problem to customers.
What hardware can I share?
CPU, RAM and GPU-backed capacity with local limits, uptime windows and workload filters.
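A declaration covering those limits, windows, and filters might look roughly like this. Every field name below is an assumption about the worker config shape, not the published schema.

```python
# Illustrative worker capacity declaration; all field names are assumptions.
worker_declaration = {
    "hardware": {"cpu_cores": 8, "ram_gb": 32, "gpu": {"tier": "G2", "vram_gb": 12}},
    "limits": {"max_cpu_share": 0.5, "max_ram_gb": 16},    # share half the machine
    "uptime_window": {"days": "mon-fri", "hours": "22:00-08:00"},
    "accepted_classes": ["finite", "batch"],               # filter out long-lived services
    "payout": {"route": "nlr_base", "auto_withdraw": True},
}

# local limits must fit inside the declared hardware
assert worker_declaration["limits"]["max_ram_gb"] <= worker_declaration["hardware"]["ram_gb"]
```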
How do earnings work?
Workers accrue gross NLR from useful work or protected reserve, then NoirLedger applies a 10% network fee before payout.
How do payouts work?
Workers can withdraw automatically in NLR on Base with only chain costs, or choose manual bank settlement with the 10% network fee plus a 5% bank fee.
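The two payout routes reduce to simple fee math. This sketch assumes the 5% bank fee applies after the 10% network fee; the exact ordering is not specified above.

```python
def payout_onchain_nlr(gross_nlr: float) -> float:
    """Automatic NLR withdrawal on Base: 10% network fee; chain gas is extra."""
    return gross_nlr * (1 - 0.10)

def payout_bank(gross_nlr: float) -> float:
    """Manual bank settlement: 10% network fee, then a 5% bank fee (assumed order)."""
    return payout_onchain_nlr(gross_nlr) * (1 - 0.05)

print(payout_onchain_nlr(100))  # 90.0 NLR before gas
print(payout_bank(100))         # 85.5 after both fees
```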
Protected supply starts with trusted core
External workers expand margin and market coverage after core capacity is stable. Useful work and reserved protected capacity drive payouts.
One network asset. One stable compute unit.
NLR powers worker rewards, future staking and network utility. NLC is the stable internal billing unit, pegged to compute credit value so pricing tables and invoices remain readable.
NLR
Network reserve asset for rewards, token-funded top-ups and future collateral.
NLC
Stable compute credit used for quotes, billing, usage and customer balance.
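The nanoNLC accounting mentioned earlier implies integer ledger math at 10^-9 NLC resolution. A minimal sketch of that idea follows; only the nano scale comes from the name, and the parsing helper is illustrative.

```python
NANO = 10**9  # 1 NLC = 1_000_000_000 nanoNLC; integers avoid float drift in a ledger

def to_nano(nlc: str) -> int:
    """Parse a decimal NLC string into integer nanoNLC."""
    whole, _, frac = nlc.partition(".")
    return int(whole) * NANO + int(frac.ljust(9, "0")[:9])

# an hourly rate of 0.1756 NLC can be metered per second without rounding drift
rate = to_nano("0.1756")   # 175_600_000 nanoNLC per hour
per_second = rate // 3600  # integer nanoNLC per second (floor)
```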
The public product grows in deliberate layers
Foundation first, customer MVP second, protected runtimes next, worker expansion only after core execution is stable.