
Quickstart

From zero to structured data in under 5 minutes. This guide walks you through creating an API key, submitting your first crawl, and fetching the results.

01

Create an API key

Sign up for a free account — no credit card required. Then navigate to Dashboard → API Keys and click Create key. Copy the full key immediately — it's only shown once.

Your API key format
cai_sk_••••••••••••••••••••••••••••••••
02

Submit a crawl

Send a POST request to /api/crawl with the target URL and your desired output format.

Request: POST /api/crawl
curl -X POST https://webextract.mabai.tech/api/crawl \
  -H "Authorization: Bearer cai_sk_••••" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://docs.example.com",
    "limit": 10,
    "formats": ["markdown"]
  }'
Response: 201 Created
{
  "success": true,
  "jobId": "a1b2c3d4-…",
  "cfJobId": "cf_job_••••"
}
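The jobId in that response is what you'll poll in the next step. A minimal sketch of capturing it in a shell variable, using a hardcoded sample response in place of the live API call (sed keeps it dependency-free; a real pipeline would typically use jq):

```shell
# Capture the jobId from the crawl response for later polling.
# The sample response mirrors the shape shown above.
response='{"success": true, "jobId": "a1b2c3d4", "cfJobId": "cf_job_xxxx"}'
jobId=$(printf '%s' "$response" | sed -n 's/.*"jobId": *"\([^"]*\)".*/\1/p')
echo "$jobId"
```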
03

Wait for completion

Crawl jobs run asynchronously. Poll the job endpoint until its status is completed, or use a webhook to get notified automatically.

Poll job status: GET /api/crawl/:id
curl https://webextract.mabai.tech/api/crawl/a1b2c3d4 \
  -H "Authorization: Bearer cai_sk_••••"
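The curl call above can be wrapped in a simple loop. A minimal polling sketch, assuming the job endpoint returns a JSON body with a top-level status field; fetch_status wraps the request shown above, and API_KEY is a placeholder for your real key:

```shell
# Fetch the raw job JSON for a given job id.
fetch_status() {
  curl -s "https://webextract.mabai.tech/api/crawl/$1" \
    -H "Authorization: Bearer $API_KEY"
}

# Loop until the job reports "completed", then print the final status.
poll_until_complete() {
  while :; do
    status=$(fetch_status "$1" | sed -n 's/.*"status": *"\([^"]*\)".*/\1/p')
    [ "$status" = "completed" ] && break
    sleep 2   # back off between polls to stay under rate limits
  done
  echo "$status"
}

# Usage: poll_until_complete a1b2c3d4
```

A real script should also bail out on a failed or cancelled status rather than looping forever.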

Prefer webhooks over polling — see the Webhooks guide

04

Read your data

Once the job is completed, fetch the stored pages. Each page includes its URL, Markdown content, raw HTML, and metadata — all persisted permanently in your account.

Page response shape
{
  "url": "https://docs.example.com/intro",
  "status": "completed",
  "markdown": "# Introduction\n\nThis guide covers…",
  "metadata": { "title": "Introduction", "status": 200 }
}
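The markdown field can be pulled straight out of a page object and written to disk. A minimal sketch assuming the page JSON is already in a shell variable; sed keeps it dependency-free, though it won't unescape \n sequences the way jq -r would:

```shell
# Extract the markdown field from a page object and save it to a file.
# The sample page mirrors the shape shown above.
page='{"url": "https://docs.example.com/intro", "status": "completed", "markdown": "# Introduction"}'
printf '%s' "$page" | sed -n 's/.*"markdown": *"\([^"]*\)".*/\1/p' > intro.md
cat intro.md
```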

You're all set

Your crawled pages are stored permanently in your account. Fetch them anytime, export them, or pipe them into your LLM pipeline.

Next steps