Global compute fabric

Request compute capacity, not servers.

Run apps, APIs, workers, batch and AI workloads on a global compute fabric with wallet-native billing, protected uptime, and hybrid capacity pools.

Billing unit

1 NLC = 1 USD

Stable internal compute credit with nanoNLC accounting.
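As a sketch of what nanoNLC accounting implies, balances can be held as integers so no floating-point rounding ever leaks into billing. The 1 NLC = 10⁹ nanoNLC scale is an assumption inferred from the "nano" prefix; the page does not state the unit scale.

```python
from decimal import Decimal

# Assumption: 1 NLC = 1 USD and 1 NLC = 1_000_000_000 nanoNLC.
NANO_PER_NLC = 1_000_000_000

def nlc_to_nano(amount: str) -> int:
    """Convert a decimal NLC amount to an integer nanoNLC balance."""
    nano = Decimal(amount) * NANO_PER_NLC
    if nano != nano.to_integral_value():
        raise ValueError("amount is finer than nanoNLC resolution")
    return int(nano)

def nano_to_nlc(nano: int) -> str:
    """Render an integer nanoNLC balance as a human-readable NLC string."""
    return str(Decimal(nano) / NANO_PER_NLC)

# Integer balances never accumulate float rounding error.
balance = nlc_to_nano("10")          # 10 NLC top-up
balance -= nlc_to_nano("0.030861")  # one second of burn at 0.030861 NLC/sec
```

Keeping money as integers and converting to decimal strings only at the display edge is the standard ledger-grade pattern.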

Trust rule

No idle variable spend

No workload, no reserve, no variable spend.

Live event stream
healthy
Card funding and NLR bonus rails
Unified workload API and SDKs
Per-second pricing with quote-before-deploy
How it works

From funding to execution in five steps

Wallet-native identity and capacity requests keep the workflow direct without exposing raw nodes.

01

Add funds

Card funding converts into NLC, and NLR-funded top-ups can receive a token incentive without changing compute billing.

02

Request capacity

The quote engine prices CPU, RAM, VRAM, bandwidth and availability.
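A minimal sketch of how such a quote could be assembled from resource dimensions. Every rate and the availability multiplier here are hypothetical placeholders, not NoirLedger's actual pricing.

```python
# Hypothetical per-hour rate card (placeholder values, not real pricing).
RATES = {
    "cpu_core": 0.02,      # NLC per core-hour
    "ram_gb": 0.01,        # NLC per GB-hour
    "vram_gb": 0.15,       # NLC per GB-hour
    "bandwidth_gb": 0.005, # NLC per GB transferred
}
AVAILABILITY_MULTIPLIER = {"best_effort": 1.0, "protected": 1.6}

def quote_hourly(cpu_cores, ram_gb, vram_gb, bandwidth_gb, availability):
    """Price one hour of requested capacity under an availability class."""
    base = (cpu_cores * RATES["cpu_core"]
            + ram_gb * RATES["ram_gb"]
            + vram_gb * RATES["vram_gb"]
            + bandwidth_gb * RATES["bandwidth_gb"])
    return round(base * AVAILABILITY_MULTIPLIER[availability], 4)

hourly = quote_hourly(cpu_cores=2, ram_gb=4, vram_gb=0,
                      bandwidth_gb=10, availability="best_effort")
```

The point of the shape: availability is a multiplier over a linear resource sum, so quotes stay explainable line by line.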

03

Launch a workload

Launch everything as a single workload object, whether it runs as a long-lived service or a finite execution.

04

Scale across the network

Placement stays pool-based. Customers request capacity, not machines.

05

Pay for use or reserve

Protected reserve is explicit. Otherwise variable spend stays at zero when idle.
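The billing rule above can be sketched as a per-second charge function. This is illustrative only; the real meter shape and reserve semantics are assumptions.

```python
def per_second_charge(running: bool, hourly_rate: float,
                      protected_reserve_hourly: float = 0.0) -> float:
    """Charge for one second of wall clock.

    Variable spend applies only while a workload is actually running;
    protected reserve is explicit and billed only while it is held.
    """
    variable = hourly_rate / 3600 if running else 0.0
    reserve = protected_reserve_hourly / 3600
    return variable + reserve
```

With no workload running and no reserve held, the function returns exactly zero, which is the trust rule stated above.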

Platform pillars

Four clear surfaces around one compute fabric

Long-running services, finite execution, direct SDK control and the worker supply layer all stay under one workload model.

Service workloads

Run APIs, bots, services and backend systems as long-running workloads with optional protected uptime.

Finite workloads

Run jobs, pipelines and burst inference without keeping capacity online when nothing is running.

SDK and API

Request capacity, deploy workloads, stream logs and manage usage with simple primitives.

Worker supply

Turn controlled local hardware into network capacity and choose how much of your machine you share.

Why NoirLedger

Built like modern infrastructure, not a VPS marketplace

The product sells compute capacity, protected uptime and distributed execution under a calm, premium control plane.

Compute pools, not single nodes

Hybrid supply under one control plane

Wallet-native credits and network asset

Protected uptime options

Developer-first primitives

Simple quoting and live burn tracking

Live event stream
healthy
[control-plane] pool fit confirmed for protected runtime
[billing] reserve accounting active for replicas_min=2
[wallet] top-up settled into NLC ledger account
[metrics] burn rate updated: 0.030861 NLC/sec
[worker-gateway] trusted core capacity healthy across 3 pools
Platform readiness

Built as a full platform, not just a landing page

Identity, ledger correctness, control-plane policy, worker trust and provider abstraction all shape the product from day one.

Identity and access

Account auth, wallet linking and session controls are part of the product foundation, not an afterthought.

Email verification and reset flows
TOTP 2FA and recovery codes
Session management and audit events
Step-up auth for sensitive actions

Billing and ledger

NLC remains the stable compute unit with ledger-grade accounting and reserve separated from variable usage.

nanoNLC integer accounting
Immutable transaction log
Reserve vs variable spend separation
No workload, no reserve, no variable spend
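The ledger properties listed above can be sketched as an append-only entry log with reserve and variable spend rolled up separately. This is an illustration of the separation, not the production schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class LedgerEntry:
    kind: str       # "topup" | "variable" | "reserve"
    nano_nlc: int   # signed integer nanoNLC amount

@dataclass
class Ledger:
    entries: list = field(default_factory=list)  # append-only, never mutated

    def post(self, kind: str, nano_nlc: int) -> None:
        self.entries.append(LedgerEntry(kind, nano_nlc))

    def balance(self) -> int:
        return sum(e.nano_nlc for e in self.entries)

    def spend(self, kind: str) -> int:
        # Reserve and variable spend roll up as separate totals.
        return -sum(e.nano_nlc for e in self.entries if e.kind == kind)

ledger = Ledger()
ledger.post("topup", 10_000_000_000)  # +10 NLC funded from card
ledger.post("variable", -30_861_000)  # one second of metered burn
ledger.post("reserve", -5_000_000)    # protected reserve hold
```

Because entries are immutable and only appended, the current balance is always a pure fold over the transaction log.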

Control plane and execution

Quotes, placement, workload records and protected policy are designed as one operational surface.

Quote-before-deploy flow
Unified workload state model
Protected reserve enforcement
Trusted-core execution first

Worker trust and rails

Worker growth depends on trust controls, payout abstraction and provider capability checks.

Capability declaration and heartbeat
Reputation, quarantine and mismatch detection
Provider-agnostic funding and payouts
NLR isolated from execution path
Pricing preview

Quote before deploy

The pricing engine stays simple on the surface even when reserve and availability logic are involved.

best effort

API runtime

2 cores, 4 GB RAM, 0 GB VRAM

0.1756 NLC/hour
126.48 NLC/month
protected

Protected inference endpoint

4 cores, 8 GB RAM, 8 GB VRAM

2.8111 NLC/hour
2024.01 NLC/month
best effort

Batch AI inference

6 cores, 16 GB RAM, 12 GB VRAM

1.9230 NLC/hour
1384.59 NLC/month

Quote preview

0.1756 NLC/hour

126.48 NLC/month estimate

Current burn: 0.000048 NLC/sec
Active allocation: 0.1756 NLC/hour
Protected reserve: 0.0000 NLC/hour
Placement policy: near (inferred from ingress)
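The per-second burn shown is consistent with the hourly allocation divided by 3600 and cut to six decimals. Truncation (rather than rounding) is an inference from the displayed digits, since 0.1756 / 3600 ≈ 0.0000488.

```python
import math

def burn_per_second(nlc_per_hour: float, decimals: int = 6) -> float:
    """Per-second burn rate, truncated to the display precision."""
    scale = 10 ** decimals
    return math.floor(nlc_per_hour / 3600 * scale) / scale

burn = burn_per_second(0.1756)  # matches the 0.000048 NLC/sec shown above
```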

No workload, no reserve, no variable spend.

Developer primitives

Low-friction APIs and SDKs

Start with quotes and a unified workload object, then expand into logs, metrics and billing data.

JavaScript
ts
import { NoirLedger } from "@noirledger/sdk";

const client = new NoirLedger({ apiKey: process.env.NOIRLEDGER_API_KEY! });

const quote = await client.createQuote({
  lane: "runtime",
  availability_class: "protected",
  region_mode: "specific",
  region: "us-east-1",
  cpu_cores: 4,
  cpu_ghz: 3.2,
  ram_gb: 8,
  vram_gb: 8,
  gpu_tier: "G2"
});
Python
py
import os

from noirledger import NoirLedger

client = NoirLedger(api_key=os.environ["NOIRLEDGER_API_KEY"])

workload = client.create_workload(
    project_id="proj_123",
    quote_id="qte_123",
    image="ghcr.io/acme/worker:latest",
    command=["python", "run.py"],
)
Workload request
bash
curl -X POST http://localhost:8080/v1/workloads \
  -H "Authorization: Bearer $NOIRLEDGER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project_id": "proj_core_api",
    "quote_id": "qte_123",
    "image": "ghcr.io/acme/api:latest",
    "endpoint": { "type": "http", "port": 8080 },
    "replicas_min": 2
  }'
Deploy CLI
bash
noir deploy workload \
  --project proj_core_api \
  --quote qte_123 \
  --image ghcr.io/acme/api:latest \
  --endpoint http:8080
Worker lane

Controlled local hardware becomes network capacity

Workers declare limits, hardware profile, accepted workload classes and payout route without exposing the node-selection problem to customers.

What hardware can I share?

CPU, RAM and GPU-backed capacity with local limits, uptime windows and workload filters.

How do earnings work?

Workers accrue gross NLR from useful work or protected reserve, then NoirLedger applies a 10% network fee before payout.

How do payouts work?

Workers can withdraw automatically in NLR on Base with only chain costs, or choose manual bank settlement with the 10% network fee plus a 5% bank fee.
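A sketch of the fee math implied by the two answers above. Applying the 5% bank fee after the 10% network fee, and subtracting chain costs last, are assumptions about ordering that the copy does not pin down.

```python
def payout(gross_nlr: float, route: str, chain_cost_nlr: float = 0.0) -> float:
    """Net worker payout after fees (assumed fee model, see note above).

    All routes: 10% network fee on gross earnings.
    "bank": an additional 5% bank fee on the remainder.
    "nlr_on_base": only chain costs beyond the network fee.
    """
    net = gross_nlr * 0.90  # 10% network fee
    if route == "bank":
        net *= 0.95         # 5% bank fee on the post-network-fee amount
    elif route == "nlr_on_base":
        net -= chain_cost_nlr
    else:
        raise ValueError("unknown payout route")
    return round(net, 6)
```

Under these assumptions, 100 NLR gross settles to 85.5 via bank, or 90 minus chain costs via NLR on Base.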

Worker alpha

Protected supply starts with trusted core

External workers expand margin and market coverage after core capacity is stable. Useful work and reserved protected capacity drive payouts.

Shared CPU example: 16 cores
Shared VRAM example: 24 GB
Payout routes: NLR direct or bank settlement
Network economy

One network asset. One stable compute unit.

NLR powers worker rewards, future staking and network utility. NLC is the stable internal billing unit, pegged to compute credit value so pricing tables and invoices remain readable.

NLR

Network reserve asset for rewards, token-funded top-ups and future collateral.

NLC

Stable compute credit used for quotes, billing, usage and customer balance.

Execution roadmap

The public product grows in deliberate layers

Foundation first, customer MVP second, protected runtimes next, worker expansion only after core execution is stable.

Phase 0

Foundation

Monorepo, design system, marketing site and docs shell
Dashboard shell, auth shell and NLC ledger foundation
Persistent funding requests and core service boundaries
Phase 1

Customer MVP

Quote engine, wallet balance, runtime records and batch records
Stripe-first top-up flow and billing meter
Trusted-core execution and logs/metrics surfaces
Phase 2

Protected runtime MVP

Endpoint exposure and protected reserve billing
Replica minimum and health-based restart policy
Clear protected runtime pricing and controls
Phase 3

Worker alpha

Worker registration, heartbeat and capability declaration
Reputation scoring, quarantine and payout records
Curated external worker expansion after core stability