Become an Inference Provider
Transform your AI services into verifiable, revenue-generating endpoints on the 0G Compute Network. This guide covers setting up your service and connecting it through the provider broker.
Why Become a Provider?
- Monetize Your Infrastructure: Turn idle GPU resources into revenue
- Automated Settlements: The broker handles billing and payments automatically
- Trust Through Verification: Offer verifiable services for premium rates
Prerequisites
- Docker Compose 1.27+
- OpenAI-compatible model service
- Wallet with 0G tokens for gas fees
Setup Process
Prepare Your Model Service
Service Interface Requirements
Your AI service must implement the OpenAI API Interface for compatibility. This ensures consistent user experience across all providers.
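As a quick sanity check before registering, you can validate that responses from your service match the minimal shape of an OpenAI chat completion. The sketch below is illustrative only; the field names follow the public OpenAI Chat Completions schema:

```python
# Minimal shape check for an OpenAI-style chat completion response.
# Field names follow the public OpenAI Chat Completions schema.

def is_openai_chat_completion(resp: dict) -> bool:
    """Return True if `resp` has the core fields clients expect."""
    if resp.get("object") != "chat.completion":
        return False
    choices = resp.get("choices")
    if not isinstance(choices, list) or not choices:
        return False
    message = choices[0].get("message", {})
    return "role" in message and "content" in message

# Example response in the expected shape:
sample = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "choices": [{"index": 0, "message": {"role": "assistant", "content": "Hi!"}}],
}
print(is_openai_chat_completion(sample))  # True
```

Run a check like this against your service's actual responses before exposing it to the network.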
Verification Interfaces
To guarantee the integrity and trustworthiness of services, the network supports several verification mechanisms, each with its own protocols and requirements.
- TEE Verification (TeeML)
- OPML, ZKML (Coming Soon)
TEE (Trusted Execution Environment) verification ensures your computations are tamper-proof. Services running in TEE:
- Generate signing keys within the secure environment
- Provide CPU and GPU attestations
- Sign all inference results
The attestations must include the signing key's public key, proving that the key was generated inside the TEE. All inference results must then be signed with that key.
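The sign-and-verify flow can be sketched as follows. This is an illustration only: it uses Ed25519 via the third-party `cryptography` package, while the actual key type and attestation format used by TeeML may differ.

```python
# Illustrative sign/verify flow for TEE-signed inference results.
# Assumes the third-party `cryptography` package; the real TeeML scheme
# may use a different key type and attestation format.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Inside the TEE: the signing key is generated in the enclave, and its
# public key is embedded in the CPU/GPU attestations.
signing_key = Ed25519PrivateKey.generate()
attested_public_key = signing_key.public_key()

# Each inference result is signed before it leaves the enclave.
result = b'{"choices": [{"message": {"content": "Hello"}}]}'
signature = signing_key.sign(result)

# On the client side: verify the result against the attested public key.
try:
    attested_public_key.verify(signature, result)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```

The key point is that users never need to trust your host OS: the attestation binds the public key to the enclave, and every response carries a signature checkable against that key.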
Hardware Requirements
- CPU: Intel TDX (Trusted Domain Extensions) enabled
- GPU: NVIDIA H100 or H200 with TEE support
TEE Node Setup
There are two ways to start a TEE node for your inference service:
Method 1: Using Dstack
Follow the Dstack Getting Started Guide to prepare your TEE node using Dstack.
Method 2: Using Cryptopilot
Follow the 0G-TAPP README to set up your TEE node using Cryptopilot.
Download and Configure Inference Broker
The Inference Broker registers and manages your TEE service, proxies user requests, and performs settlements.
Download the latest installation package from the releases page and extract it. Then run the bundled config executable to generate the configuration file and docker-compose.yml for your setup.
```bash
# Download from releases page
tar -xzf inference-broker.tar.gz
cd inference-broker

# Generate configuration files
./config
```
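The generated configuration will look roughly like the following. The field names below are illustrative only; defer to the file the config tool actually produces for your setup.

```yaml
# Illustrative sketch only -- field names may differ from ./config output.
service:
  servingUrl: "http://<public-ip>:3080"   # endpoint users will call
  targetUrl: "http://localhost:8000"      # your OpenAI-compatible model service
  model: "your-model-name"
wallet:
  privateKey: "<provider-wallet-private-key>"  # funds gas for settlements
```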
Future Verification Methods
Support for additional verification methods is planned, including:
- OPML: Optimistic Machine Learning proofs
- ZKML: Zero-knowledge ML verification
Stay tuned for updates.
Launch Provider Broker
Follow the instructions in Dstack or 0G-TAPP documentation to start the service using the config file and docker-compose.yml file generated in the previous step.
The broker will:
- Register your service on the network
- Handle user authentication and request routing
- Manage automatic settlement of payments
Troubleshooting
Broker fails to start
- Verify Docker Compose is installed correctly
- Check port availability
- Ensure config.local.yaml syntax is valid
- Review logs:
docker compose logs
Service not accessible
- Confirm firewall allows incoming connections
- Verify public IP/domain is correct
- Test local service:
curl -X POST http://localhost:8000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "your-model", "messages": [{"role": "user", "content": "Hello"}]}'
Settlement issues
The automatic settlement engine handles payments. If issues occur:
- Check wallet has sufficient gas
- Verify network connectivity
- Monitor settlement logs in broker output
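A quick balance check along these lines can confirm the gas condition. The RPC URL, address, and 0.05-token threshold are placeholders, and web3.py is assumed:

```python
# Sketch: confirm the provider wallet holds enough 0G for settlement gas.
# The RPC URL, address, and threshold below are placeholders, not
# values from the 0G documentation.

def has_enough_gas(balance_wei: int, min_tokens: float = 0.05) -> bool:
    """True if the balance covers an assumed minimum gas reserve (18 decimals)."""
    return balance_wei >= int(min_tokens * 10**18)

def check_wallet(rpc_url: str, address: str) -> bool:
    """Query the chain via web3.py and apply the threshold check."""
    from web3 import Web3  # third-party; pip install web3
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    return has_enough_gas(w3.eth.get_balance(address))
```

Call `check_wallet("https://<your-0g-rpc>", "0x<provider-address>")` before restarting the broker to rule out an empty wallet.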
Next Steps
- Join Community → Discord for support
- Explore Inference → Inference Documentation for integration details