Project Name: DeHug
Grant Amount Requested
$30,000 (Total)
Project Overview
DeHug is a decentralized AI model hosting and monetization platform that enables developers to upload, version, deploy, and monetize AI models with cryptographic provenance and verifiable usage.
DeHug is designed to be Hugging Face–compatible, allowing developers to use familiar workflows while removing centralized custody over model artifacts, inference execution, and usage accounting. Models on DeHug are treated as verifiable digital assets with transparent ownership and auditable usage.
DeHug currently uses Base as its coordination layer. However, Base lacks native primitives for decentralized AI storage, compute, and data availability at scale. This grant supports the migration of DeHug’s core infrastructure to 0G, enabling production-grade decentralized AI workloads.
Problem Statement
AI model hosting today is centralized and opaque:
- Developers lack verifiable proof that their models are hosted and executed as declared
- Inference usage and revenue reporting is fully custodial
- Existing L2s are not optimized for high-throughput AI storage and compute
- There is no trust-minimized infrastructure for decentralized AI serving
These limitations prevent DeHug from scaling into a production-grade decentralized AI platform.
Solution
DeHug integrates 0G Storage, 0G Compute, and 0G Data Availability to provide:
- Verifiable AI model hosting
- Decentralized inference execution
- Transparent, auditable usage metering
- Non-custodial creator monetization
0G acts as the decentralized AI infrastructure layer that enables DeHug to move from coordination to execution.
How DeHug Integrates with 0G
0G Storage – Model Hosting
- AI model artifacts (weights, configs, metadata) are stored using 0G Storage
- Cryptographic hashes of model artifacts are anchored for integrity verification
- Enables independent verification of model versions
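The integrity-verification step above can be sketched as follows. This is an illustrative sketch only, not the DeHug SDK or the 0G Storage API: `artifact_digest` and `model_manifest` are hypothetical helper names, and the manifest produced here stands in for the hashes that would be anchored on-chain.

```python
import hashlib
from pathlib import Path

def artifact_digest(path: str) -> str:
    """Compute a SHA-256 digest of a model artifact (weights, config, etc.)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in 1 MiB chunks so large weight files don't load into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def model_manifest(artifact_dir: str) -> dict:
    """Map each artifact file to its digest. Anchoring this manifest lets
    anyone independently re-verify a model version against the stored bytes."""
    return {
        p.name: artifact_digest(str(p))
        for p in sorted(Path(artifact_dir).iterdir())
        if p.is_file()
    }
```

Content-addressing the artifacts this way means a model version is identified by what it contains, not by where it is hosted.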
0G Compute – Inference Execution
- DeHug inference workers run models using 0G Compute
- Real AI workloads (LLMs, vision, NLP) are executed on decentralized infrastructure
- Each inference produces a verifiable execution receipt
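One plausible shape for such an execution receipt is sketched below. The field names and `InferenceReceipt` class are assumptions for illustration, not the 0G Compute receipt format: the key idea is that the receipt commits to the model version and to hashes of the request and response rather than to the payloads themselves.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

def commit(data: bytes) -> str:
    """Hash commitment to a payload; the payload itself is never published."""
    return hashlib.sha256(data).hexdigest()

@dataclass
class InferenceReceipt:
    """Hypothetical verifiable execution receipt (illustrative only)."""
    model_hash: str         # digest of the model version that ran
    input_commitment: str   # hash of the request payload
    output_commitment: str  # hash of the response payload
    worker_id: str
    timestamp: float

    def digest(self) -> str:
        """Canonical digest of the receipt, suitable for anchoring to DA."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

receipt = InferenceReceipt(
    model_hash=commit(b"model-weights"),
    input_commitment=commit(b"prompt"),
    output_commitment=commit(b"completion"),
    worker_id="worker-1",
    timestamp=time.time(),
)
```

A third party holding the original request and response can recompute both commitments and the receipt digest, which is what makes the execution verifiable after the fact.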
0G Data Availability – Usage Proofs
- Inference receipts and usage records are published to 0G DA
- Enables transparent usage accounting without exposing sensitive inputs or outputs
- Forms the basis for non-custodial creator payments
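The accounting step the bullets describe can be sketched in a few lines. This is a simplified model, assuming receipts are published as records carrying a `model_hash` field; the function names and per-inference pricing are hypothetical.

```python
from collections import Counter

def usage_summary(receipts: list) -> dict:
    """Count inferences per model from published receipts. Receipts carry
    only hash commitments, so sensitive inputs/outputs stay private."""
    return dict(Counter(r["model_hash"] for r in receipts))

def creator_payout(receipts: list, price_per_inference: float) -> dict:
    """Non-custodial accounting sketch: revenue per model derives directly
    from verifiable receipt counts, not a platform-reported number."""
    return {
        model: count * price_per_inference
        for model, count in usage_summary(receipts).items()
    }
```

Because anyone can recount the published receipts, creators do not need to trust the platform's revenue reporting.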
Milestones & Funding Breakdown (Total: $30,000)
M1: Core Migration to 0G
Amount: $10,000
Timeline: Month 1
Deliverables:
- Integrate 0G Storage for AI model artifacts
- Migrate DeHug’s model registry from Base to 0G-backed storage pointers
- Update DeHug SDK for 0G-native model uploads
- End-to-end pipeline: Upload → Store → Verify
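The Upload → Store → Verify pipeline can be sketched end to end against a stand-in storage backend. `InMemoryStorage` below is a hypothetical placeholder for a 0G Storage client (the real SDK API is not shown here); the point is the round trip from artifact to content-addressed pointer and back to a verified digest.

```python
import hashlib

class InMemoryStorage:
    """Stand-in for a 0G Storage client; real calls would go over the network."""
    def __init__(self):
        self._blobs = {}

    def store(self, data: bytes) -> str:
        # Content-addressed: the pointer IS the digest of the stored bytes.
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        return key

    def fetch(self, key: str) -> bytes:
        return self._blobs[key]

def upload_store_verify(storage, artifact: bytes) -> bool:
    """Upload → Store → Verify: round-trip an artifact and check its digest."""
    pointer = storage.store(artifact)
    retrieved = storage.fetch(pointer)
    return hashlib.sha256(retrieved).hexdigest() == pointer
```

With content addressing, the registry entry migrated from Base only needs to hold the pointer; verification requires no trust in the storage provider.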
M2: Inference & Compute Integration
Amount: $10,000
Timeline: Month 2
Deliverables:
- Deploy DeHug inference workers on 0G Compute
- Enable pay-per-inference execution
- Publish inference execution receipts to 0G DA
- Load testing with real models and inference traffic
M3: Usage Metering & Creator Monetization
Amount: $10,000
Timeline: Month 3
Deliverables:
- Usage metering based on 0G-backed inference receipts
- Verifiable model usage records
- Creator revenue accounting tied to inference execution
- Public demo: Upload → Run → Pay → Verify
Expected Impact on the 0G Network
- Continuous 0G Storage usage from AI model hosting
- Real 0G Compute demand from inference workloads
- Persistent 0G DA traffic from usage and execution receipts
- DeHug becomes a developer-facing demand on-ramp to 0G
Each deployed model and each inference contributes directly to network usage and node operator revenue.
Why Now
DeHug has a working platform and active development but requires infrastructure optimized for decentralized AI. 0G provides the missing primitives needed to move DeHug into production while bringing immediate, real-world storage and compute demand to the network.
Repository
GitHub: Timi16/dehug (Decentralized Hugging Face UI, SDK, and Playground)