1) Project overview and objectives
What we’re building: TradeOS lets users “vibe-code” trading decision workflows (assets/timeframes/indicators/confirmation + risk rules) in natural language. On 0G, we’ll turn these workflows into verifiable, composable decision agents, where every signal is shipped with provenance (agent spec hash, model/version, data snapshot hash, execution logs) so other apps/protocols can trust and reuse them.
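As a sketch of the provenance bundle described above, each signal could ship with a canonical hash over its inputs; all field names here are illustrative assumptions, not a final TradeOS schema:

```python
import hashlib
import json

def canonical_hash(obj: dict) -> str:
    """Deterministic SHA-256 over canonical JSON (sorted keys, no whitespace)."""
    payload = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(payload).hexdigest()

# Illustrative provenance bundle shipped with each signal
# (field names are assumptions, not the final schema).
provenance = {
    "agent_spec_hash": canonical_hash({"assets": ["ETH/USDT"], "timeframe": "1h"}),
    "model": {"name": "example-llm", "version": "2025-01"},
    "data_snapshot_hash": canonical_hash({"source": "ohlcv", "as_of": "2025-01-01T00:00:00Z"}),
    "execution_log_uri": "0g://storage/<log-object>",  # placeholder reference
}

# The value that would be anchored onchain for third-party verification.
result_hash = canonical_hash(provenance)
```

Canonical serialization matters here: without sorted keys and fixed separators, two semantically identical bundles could hash differently and break verification.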
Traction:
- ~70,000 MAU
- 23.5% D7 / 18% D30 retention
- 16,500 pro-created agents
Objectives (Guild MVP):
- Ship a testnet verifiable signal pipeline (spec → inference → onchain anchoring).
- Launch a strategy template library (copy/fork/run) backed by verifiable metadata.
- Provide an API + demo integration for an ecosystem app to consume signals + proofs.
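For the third objective, the API payload a consumer app receives could pair the signal with the references needed to verify it. The shape below is a hypothetical sketch, not a committed interface:

```python
import json

# Hypothetical API response for a consumer app: the signal itself plus the
# references a third party needs to verify it (all names are illustrative).
signal_response = {
    "signal": "long",
    "confidence": 0.72,
    "agent_spec_hash": "0x" + "ab" * 32,   # registered in the Agent Registry
    "result_hash": "0x" + "cd" * 32,       # anchored on 0G Chain
    "storage_refs": {
        "spec": "0g://storage/<spec-object>",
        "execution_log": "0g://storage/<log-object>",
    },
}

encoded = json.dumps(signal_response)  # what the API would serve
decoded = json.loads(encoded)
```

The point of the shape is that a consumer never has to trust the API: every claim in the payload is checkable against Storage content and onchain anchors.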
2) Technical architecture and implementation plan
Lean architecture
- TradeOS App: Strategy builder outputs an Agent Spec (JSON) + schedule/risk constraints.
- Orchestrator (runtime): Builds context (market data snapshot + features) and dispatches inference tasks.
- 0G Compute (Inference): Runs LLM/domain inference and returns structured outputs (signal, confidence, rationale, self-checks).
- 0G Storage: Stores agent specs, templates, logs, backtest summaries; anchors content hashes for auditability.
- 0G Chain (EVM): Minimal contracts:
  - Agent Registry (spec hash, creator, versioning)
  - Result Anchoring (result hash + references to Storage/DA)
- 0G DA (optional): Batch-publish hourly/daily signal snapshots for scalable availability.
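A minimal sketch of the structured output the Compute step would return, with basic validation enforced before anchoring; the schema is an assumption for illustration, not the production interface:

```python
from dataclasses import dataclass, field

ALLOWED_SIGNALS = {"long", "short", "flat"}

@dataclass
class InferenceResult:
    """Structured output from a decision-agent run (illustrative schema)."""
    signal: str
    confidence: float
    rationale: str
    self_checks: dict = field(default_factory=dict)

    def validate(self) -> bool:
        """Reject malformed results before they are anchored onchain."""
        return (
            self.signal in ALLOWED_SIGNALS
            and 0.0 <= self.confidence <= 1.0
            and bool(self.rationale)
            and all(self.self_checks.values())
        )

result = InferenceResult(
    signal="long",
    confidence=0.72,
    rationale="EMA crossover confirmed by volume spike",
    self_checks={"risk_limits_respected": True, "data_fresh": True},
)
```

Validating before anchoring keeps garbage results out of the onchain record, where they would be expensive and confusing to retract.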
8-week plan
- W1–3: Contracts + Storage integration (templates/specs/logs).
- W4–5: Compute inference integration; structured outputs + provenance.
- W6–8: Public demo + API; reference integration (or 1 ecosystem pilot); monitoring/rate limits.
3) How we’ll integrate with 0G infrastructure
- 0G Compute: Execute decision agents and return structured outputs with verifiable metadata.
- 0G Storage: Persist agent specs/templates/audit logs; keep hashes as canonical verification anchors.
- 0G Chain: Anchor agent + result hashes onchain so third parties can verify what generated each signal and on which snapshot.
- 0G DA (if scope fits): Publish batched signal datasets for broad distribution and availability.
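The verification path this integration enables can be sketched as follows: a third party fetches the spec or log bytes from Storage, recomputes the hash, and compares it with the value anchored on 0G Chain. The Storage fetch and contract read are simulated here; only the hash check is real:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_anchored(content: bytes, anchored_hash: str) -> bool:
    """A consumer recomputes the content hash and compares it with the hash
    anchored onchain. True means the bytes match what was registered."""
    return sha256_hex(content) == anchored_hash

# Simulated flow (real code would fetch from 0G Storage and read the
# Result Anchoring contract instead of these local stand-ins).
spec_bytes = b'{"assets":["ETH/USDT"],"timeframe":"1h"}'
anchored = sha256_hex(spec_bytes)  # value the contract would store

ok = verify_anchored(spec_bytes, anchored)               # untampered content
tampered = verify_anchored(spec_bytes + b"x", anchored)  # modified content
```

This is the core trust property: any byte-level change to a spec, snapshot, or log invalidates the anchor, so consumers can detect tampering without trusting TradeOS.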
4) Team background and experience
- 10+ years as a global product veteran at big tech companies and leading startups.
- Strong AI skills: model pre-training and feature engineering on exabyte-scale data lakes.
- Leadership across onchain AI data infrastructure and onchain exchange infrastructure, built for real ecosystems, not demos.
- Deep background in global payment systems and risk/compliance AI, with a production reliability and security mindset.
5) Funding requirements and milestones
Requested grant: $30k (plus gas credits / technical review support if available).
Use of funds (lean):
- 70% Engineering (Compute/Storage integration, contracts, API)
- 15% Security & QA (contract review, testing, monitoring)
- 15% Infra & data pipelines
Milestones
- M1 (Week 3): Agent Registry + Result Anchoring contracts on testnet; Storage-backed template/spec publishing.
- M2 (Week 5): End-to-end verifiable inference pipeline on 0G Compute; signals + provenance metadata.
- M3 (Week 8): Public demo + API; reference integration repo (or 1 ecosystem pilot); initial verified template library.