zoya.id — AI Research & Education Platform

What I’m building

I run a platform called zoya.id (https://zoya-id-beta.vercel.app) — a set of AI tools for students, lecturers, and researchers in Indonesia and Southeast Asia. It’s in open beta right now.

There are three main features, all in Bahasa Indonesia with academic-register output:

  • AI Kuesioner Builder. You give it a research variable (say, “self-efficacy of online learners”) and it produces a full psychometric instrument: operational definition grounded in theory, dimensional structure, item indicators, and actual Likert items with proper reverse-coding. It also has a library of nine pre-validated instruments (GSE, RSES, UTAUT, WHO-5, PHQ-9, Mini-IPIP, MBI, TPB, JSS) complete with APA citations, because most thesis examiners in Indonesia want to see a named scale, not something hallucinated.
  • Statistical Analysis Suite. Fourteen tests so far (descriptive, t-test, ANOVA, regression, mediation, moderation, EFA/CFA, chi-square, normality, reliability), each with one-click execution, chart generation, PDF export, and an AI-generated interpretation written in formal academic Indonesian. For users who still don’t get what the numbers mean, there’s a “Belum Paham?” (“Don’t get it yet?”) chat that switches to casual language (think: explaining a p-value like you’re telling your cousin). That feature was added directly because of user feedback during beta.
  • AI Assessment Engine. A lecturer uploads a grading rubric and a batch of student answers (Excel / PDF / Word). The platform grades each student against the rubric — per-criterion scores with written reasoning — and exports class-level reports. The use case is grading 100+ essays in under 10 minutes instead of spending a weekend on it.

The problem I actually see

I didn’t build this because of a market report. I built it because of what I keep seeing around me.

Students struggling to process their own research data. Some of them don’t understand statistics at all: they were never taught properly, or they were, but the course was three years ago and now they’re staring at SPSS for the first time in their lives, trying to finish a thesis. Some can’t afford a laptop and are trying to run an analysis on a borrowed device or their phone. Some have a laptop, but an SPSS license runs Rp 9 million and there’s no affordable legal way for them to get access. Some know statistics in theory but freeze when they actually see their own dataset, because “what test do I even use here?”

It’s not a specialized problem. It’s the default experience. Every semester, every campus. People finishing a thesis at 3am, stuck on an error message they can’t read.

So the goal isn’t “another AI tool.” The goal is: make this accessible and cheap enough that it stops being a barrier at all. Upload your data in a browser, on a phone if that’s what you have, in your own language, get the analysis and an explanation you actually understand. Pay almost nothing for it, or nothing at all during beta.

Where this is going

The plan is staged, on purpose:

  1. Indonesia first — this is where the pain is closest to me, where I have users, and where “academic register in Bahasa Indonesia” is a moat nobody else cares about. Get the product tight here first.

  2. APAC next — same problem, different languages and academic conventions. Vietnam, the Philippines, Thailand, and Malaysia all show the same pattern: huge student populations, expensive or blocked access to Western tools, and local vendors leaving the segment underserved.

  3. Eventually, anywhere — once the core product is solid and localized well for a few markets, the template is clear. Latin America, Africa, anywhere the “US-centric AI at US prices” gap is real.

And I don’t see this as a feature-freeze product. There’s always a new statistical test users ask for, a new AI feature that makes the experience lighter, a new export format some professor wants. I treat zoya.id as a living platform — shipped 19 releases during beta already, mostly from direct user feedback, and that cadence isn’t slowing down.

This is the part where 0G becomes relevant, not just technically but strategically. Academic institutions handle sensitive data — clinical surveys, HR research, mental health studies — and routing all of that through OpenAI or Anthropic servers isn’t a story any university compliance office is comfortable with. If I’m serious about going beyond Indonesia into markets like Vietnam, India, or China where data sovereignty is a hard requirement, “runs on decentralized infrastructure with verifiable compute” stops being a nice-to-have and becomes the only viable architecture.

How I want to integrate 0G

The honest version: I’m a solo Web2 dev. I’ve shipped a production AI platform but I haven’t deployed a contract on a new chain before. So the plan below is sequenced deliberately — start with the cheapest integration to learn the stack, then expand as I gain confidence.

Phase 1 — Research Artifact Registry (weeks 1–3)

Deploy a simple smart contract on 0G Chain that maps a user’s wallet or Agent ID to a list of artifact CIDs. When a user finishes a statistical analysis or generates a questionnaire, they can opt to “mint” it to 0G — the result JSON gets uploaded to 0G Storage, and the CID gets registered on-chain with a timestamp and artifact-type tag. This gives researchers a permanent, verifiable record of their work that survives platform shutdowns, account deletions, or zoya.id itself going away. Thesis examiners can verify the artifact exists and hasn’t been altered.
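To make the shape of Phase 1 concrete, here’s a minimal sketch of the opt-in minting flow in TypeScript. The registry ABI, REGISTRY_ADDRESS, and the uploadJsonToStorage helper are all my assumptions (the real storage call would go through the 0G Storage SDK); only the ethers.js usage is standard.

```typescript
// Sketch of the opt-in "mint to 0G" flow. The registry contract interface
// and the storage helper below are assumptions, not shipped code.
import { Contract, type Signer } from "ethers";

// Hypothetical registry ABI: caller registers an artifact CID plus a type tag;
// the event gives examiners a timestamped, verifiable record to check against.
const REGISTRY_ABI = [
  "function registerArtifact(string cid, string artifactType) external",
  "event ArtifactRegistered(address indexed owner, string cid, string artifactType, uint256 timestamp)",
];
const REGISTRY_ADDRESS = "0x..."; // assumed deployment address on 0G Chain

// Placeholder: upload the result JSON to 0G Storage and return its CID.
// The real implementation would call the 0G Storage SDK here.
async function uploadJsonToStorage(json: unknown): Promise<string> {
  throw new Error("wire up the 0G Storage SDK here");
}

// Called when a user opts to mint a finished analysis or questionnaire.
export async function mintArtifact(
  signer: Signer,
  resultJson: unknown,
  artifactType: "analysis" | "questionnaire",
): Promise<string> {
  const cid = await uploadJsonToStorage(resultJson);
  const registry = new Contract(REGISTRY_ADDRESS, REGISTRY_ABI, signer);
  const tx = await registry.registerArtifact(cid, artifactType);
  await tx.wait(); // the block timestamp becomes the artifact's record time
  return cid; // stored alongside the user's account for later verification
}
```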

Phase 2 — 0G Compute for inference (weeks 4–6)

Start with the smallest, lowest-risk endpoint, /api/explain-chat (the “Belum Paham?” feature). It’s stateless, low-token, and chat UX already buys some latency tolerance from users, so it’s a good place to benchmark 0G Compute against OpenRouter/Groq; a sketch of that benchmarking setup follows below. If latency and quality are acceptable, expand to /api/interpret-stats and /api/generate-kuesioner, which are the higher-value endpoints.
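A minimal sketch of how I’d run that benchmark without touching the default path: shadow a small fraction of explain-chat traffic to a 0G Compute endpoint and log latencies side by side. The ZG_COMPUTE_URL endpoint, its OpenAI-compatible request shape, and the model names are my assumptions; OpenRouter stays the production default.

```typescript
// Sketch: shadow a fraction of explain-chat traffic to 0G Compute and
// record latency, keeping OpenRouter as the serving path throughout.
// ZG_COMPUTE_URL and its OpenAI-compatible shape are assumptions.
const ZG_SAMPLE_RATE = 0.05; // 5% of requests also hit 0G Compute

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

async function timedCompletion(
  url: string,
  apiKey: string,
  model: string,
  messages: ChatMessage[],
) {
  const start = Date.now();
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ model, messages }),
  });
  const body = await res.json();
  return { ms: Date.now() - start, text: body.choices?.[0]?.message?.content ?? "" };
}

export async function explainChat(messages: ChatMessage[]) {
  const primary = await timedCompletion(
    "https://openrouter.ai/api/v1/chat/completions",
    process.env.OPENROUTER_KEY!,
    "meta-llama/llama-3.1-70b-instruct", // example model slug
    messages,
  );
  if (Math.random() < ZG_SAMPLE_RATE) {
    // Fire-and-forget shadow request; never blocks the user's response.
    timedCompletion(process.env.ZG_COMPUTE_URL!, process.env.ZG_COMPUTE_KEY!, "llama-3.1-70b", messages)
      .then((r) => console.log("0g-shadow", { zgMs: r.ms, primaryMs: primary.ms }))
      .catch((e) => console.warn("0g-shadow failed", e));
  }
  return primary.text;
}
```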

Phase 3 — Agent ID and institutional pilots (weeks 7–10)

Bind each researcher to an Agent ID rather than a raw wallet address — better recoverability, better UX, and it sets up the story for institutional accounts (a university gets its own Agent ID umbrella, under which its researchers have sub-identities). Start one TEE-based inference pilot with an institutional partner doing sensitive-data research (mental health survey, HR assessment) where confidentiality is non-negotiable — this is where decentralized + TEE compute has a story no centralized provider can match.
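The identity model behind this phase is simple enough to sketch as types. The field names are mine, and how Agent ID actually represents institutional umbrellas and sub-identities is exactly what this phase would pin down:

```typescript
// Sketch of the planned identity model (field names are my assumptions,
// not the 0G Agent ID schema). Supabase stays the source of truth for
// auth; the Agent ID binding is an optional, additive attribute.
interface ResearcherIdentity {
  supabaseUserId: string;       // existing auth identity (email/Google)
  agentId?: string;             // bound 0G Agent ID, absent until opted in
  institutionAgentId?: string;  // umbrella Agent ID, if the researcher
                                // belongs to an institutional account
}

interface InstitutionAccount {
  agentId: string;              // the institution's umbrella Agent ID
  name: string;                 // e.g. a university faculty
  memberAgentIds: string[];     // sub-identities issued under the umbrella
}
```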

Web3 onboarding stays opt-in. The existing Supabase email/Google auth stays as default — most academic users will never care about wallets, and forcing them to is the fastest way to kill adoption. Wallet connection unlocks the 0G features (minting, Agent ID) for users who want them.

Where the project is now

Production-grade stack already in place: React frontend on Vercel, Supabase for auth + RLS, multi-provider LLM fallback (OpenRouter, Groq, Kimi), real PDF export pipeline, 14+ statistical tests with charts, login enforcement for the free beta, order tracking, AI-generated interpretations in Indonesian, and the “Belum Paham?” chat with server-side rate limiting. I’ve shipped 19+ releases during beta, most of them driven by direct user feedback.
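For context on what’s already running: the multi-provider fallback is essentially a sequential chain over OpenRouter, Groq, and Kimi. A simplified sketch of that shape (not the production code; provider internals are elided):

```typescript
// Simplified sketch of a sequential multi-provider fallback chain.
// Provider internals (auth, model selection, timeouts) are elided.
type Provider = { name: string; call: (prompt: string) => Promise<string> };

export async function completeWithFallback(
  prompt: string,
  providers: Provider[], // ordered by preference, e.g. OpenRouter, Groq, Kimi
): Promise<string> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return await p.call(prompt); // first provider that answers wins
    } catch (err) {
      lastError = err; // fall through to the next provider
    }
  }
  throw new Error(`all providers failed: ${String(lastError)}`);
}
```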

The product isn’t the bottleneck. The bottlenecks are (a) infrastructure cost as I scale inference, and (b) strategic positioning — “yet another GPT wrapper” is a losing frame in 2026, and “sovereign AI for APAC academia on decentralized infrastructure” is a much better one, both for users and for long-term defensibility.

I’m building this solo for now. Part of the reason I’m applying to Guild is to get the credibility, credits, and ecosystem access that lets me bring in a technical co-founder on the Web3 side — someone who can own the on-chain architecture while I continue driving product and user-facing AI.

Links

  • Live demo: https://zoya-id-beta.vercel.app

  • GitHub: Private — happy to share access with reviewers on request

  • Twitter/X: not live yet, will be set up before formal review

  • Discord: zaaaxx

  • Telegram: @zaaaxx1432

  • Email: zaaaxx1432@gmail.com

Why 0G specifically

I looked at several “AI × crypto” stacks. Most are either pure infrastructure plays with no real applications, or applications that use crypto as a gimmick (sign-in-with-wallet and nothing else). 0G is the first one where the component list — Storage, Compute, Chain, Agent ID, TEE — actually maps cleanly onto real problems I already have in production: where do I permanently archive research artifacts, how do I reduce inference cost without sacrificing quality, and how do I give institutional users a confidentiality guarantee that centralized LLM providers can’t match.

Also, the APAC focus of Guild matches my market. That’s not nothing.

Thanks for reading. Happy to hop on a call, share the full demo, or walk through the codebase.