Quickstart
From zero to structured data in under 5 minutes. This guide walks you through creating an API key, submitting your first crawl, and fetching the results.
Create an API key
Sign up for a free account — no credit card required. Then navigate to Dashboard → API Keys and click Create key. Copy the full key immediately — it's only shown once.
cai_sk_••••••••••••••••••••••••••••••••

Submit a crawl
Send a POST request to /api/crawl with the target URL and your desired output format.
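If you are calling the API from Python instead of curl, the same request can be sketched with the standard library alone (the key below is a placeholder — substitute your own):

```python
import json
import urllib.request

API_KEY = "cai_sk_your_key_here"  # placeholder — paste your real key

# Same payload as the curl example below.
payload = {
    "url": "https://docs.example.com",
    "limit": 10,
    "formats": ["markdown"],
}

req = urllib.request.Request(
    "https://webextract.mabai.tech/api/crawl",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# To actually send it:
# with urllib.request.urlopen(req) as resp:
#     job = json.load(resp)
```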
curl -X POST https://webextract.mabai.tech/api/crawl \
-H "Authorization: Bearer cai_sk_••••" \
-H "Content-Type: application/json" \
-d '{
"url": "https://docs.example.com",
"limit": 10,
"formats": ["markdown"]
}'

{
"success": true,
"jobId": "a1b2c3d4-…",
"cfJobId": "cf_job_••••"
}

Wait for completion
Crawl jobs are asynchronous. Poll the job endpoint until status is completed, or use a webhook to get notified automatically.
curl https://webextract.mabai.tech/api/crawl/a1b2c3d4 \
-H "Authorization: Bearer cai_sk_••••"

Tip: Prefer webhooks over polling — see the Webhooks guide.
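As a loop, polling looks roughly like the sketch below. `fetch_job` is a hypothetical helper — any callable that GETs /api/crawl/{jobId} with your key and returns the parsed JSON:

```python
import time

def wait_for_completion(fetch_job, job_id, interval=5.0, timeout=300.0):
    """Poll the job endpoint until the crawl finishes.

    fetch_job: callable(job_id) -> dict, e.g. a thin wrapper
    around urllib or requests (hypothetical, not part of the API).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_job(job_id)
        status = job.get("status")
        if status == "completed":
            return job
        if status == "failed":
            raise RuntimeError(f"crawl {job_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"crawl {job_id} did not complete within {timeout}s")
```

A sensible `interval` keeps you well under rate limits; a webhook removes the loop entirely.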
Read your data
Once the job is completed, fetch the stored pages. Each page includes its URL, Markdown content, raw HTML, and metadata — all persisted permanently in your account.
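Page records like the example response below are easy to stitch into LLM context. A minimal Python sketch (field names taken from this guide; `pages_to_corpus` is a hypothetical helper, not part of the API):

```python
def pages_to_corpus(pages):
    """Join the Markdown of completed pages into one string,
    prefixing each chunk with its source URL — useful as LLM context."""
    chunks = []
    for page in pages:
        if page.get("status") != "completed":
            continue  # skip pages that failed or are still processing
        chunks.append(f"<!-- source: {page['url']} -->\n{page['markdown']}")
    return "\n\n".join(chunks)
```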
{
"url": "https://docs.example.com/intro",
"status": "completed",
"markdown": "# Introduction\n\nThis guide covers…",
"metadata": { "title": "Introduction", "status": 200 }
}

You're all set
Fetch your stored pages anytime, export them, or pipe them straight into your LLM pipeline.
Next steps