POST /web/crawl

JavaScript
```javascript
import ContextDev from 'context.dev';

const client = new ContextDev({
  apiKey: process.env['CONTEXT_DEV_API_KEY'], // This is the default and can be omitted
});

const response = await client.web.webCrawlMd({ url: 'https://example.com' });

console.log(response.metadata);
```
```json
{
  "results": [
    {
      "markdown": "<string>",
      "metadata": {
        "url": "<string>",
        "title": "<string>",
        "crawlDepth": 123,
        "statusCode": 123,
        "success": true
      }
    }
  ],
  "metadata": {
    "numUrls": 123,
    "maxCrawlDepth": 123,
    "numSucceeded": 123,
    "numFailed": 123
  }
}
```

Authorizations

Authorization
string · header · required

Bearer authentication header of the form `Bearer <token>`, where `<token>` is your auth token.
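If you are not using the SDK, the same authentication can be sketched as a raw HTTP request. The base URL `https://api.context.dev` below is an assumption, not taken from this page — substitute your actual API host.

```javascript
// Build a POST /web/crawl request with Bearer authentication.
// Only constructs the request; sending it is shown in the comment below.
function buildCrawlRequest(apiKey, body) {
  return {
    url: 'https://api.context.dev/web/crawl', // assumed base URL
    options: {
      method: 'POST',
      headers: {
        // Bearer authentication header of the form `Bearer <token>`
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    },
  };
}

const req = buildCrawlRequest('my-token', { url: 'https://example.com', maxPages: 10 });
// To send: const res = await fetch(req.url, req.options);
```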

Body

application/json
url
string<uri>
required

The starting URL for the crawl (must include http:// or https:// protocol)

maxPages
integer
default:100

Maximum number of pages to crawl. Hard cap: 500.

Required range: 1 <= x <= 500

maxDepth
integer

Maximum link depth from the starting URL (0 = only the starting page)

Required range: x >= 0
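The depth accounting that maxDepth describes can be illustrated with a breadth-first traversal over a mock link graph. This is not the service's implementation — just a sketch of the semantics: the starting URL is depth 0, and each followed link adds 1.

```javascript
// Return the crawl depth assigned to each reachable URL, honoring maxDepth.
function crawlDepths(links, startUrl, maxDepth) {
  const depths = new Map([[startUrl, 0]]); // starting page is depth 0
  const queue = [startUrl];
  while (queue.length > 0) {
    const url = queue.shift();
    const depth = depths.get(url);
    if (depth >= maxDepth) continue; // do not follow links beyond maxDepth
    for (const next of links[url] ?? []) {
      if (!depths.has(next)) {
        depths.set(next, depth + 1);
        queue.push(next);
      }
    }
  }
  return depths;
}

// Mock link graph: example.com -> /a -> /b
const links = {
  'https://example.com': ['https://example.com/a'],
  'https://example.com/a': ['https://example.com/b'],
};
// With maxDepth 0, only the starting page is visited.
```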
urlRegex
string

Regex pattern. Only URLs matching this pattern will be followed and scraped.
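As a sketch of how such a filter behaves, the pattern can be applied to candidate URLs with `RegExp.test`. Whether the service matches against the full URL or only the path is not stated here, so treat the matching rule below as an assumption.

```javascript
// Keep only URLs matching the urlRegex pattern.
function filterUrls(urls, urlRegex) {
  const pattern = new RegExp(urlRegex);
  return urls.filter((u) => pattern.test(u));
}

const matched = filterUrls(
  ['https://example.com/docs/intro', 'https://example.com/blog/post'],
  '/docs/',
);
// matched -> ['https://example.com/docs/intro']
```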

Preserve hyperlinks in the Markdown output

includeImages
boolean
default:false

Include image references in the Markdown output

shortenBase64Images
boolean
default:true

Truncate base64-encoded image data in the Markdown output
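Conceptually, truncation like this replaces the bulk of an inline data URI so large images don't dominate the Markdown. The cutoff length and the `...[truncated]` marker below are assumptions, not the service's exact behavior.

```javascript
// Truncate base64 image data embedded in Markdown data URIs.
function shortenBase64Images(markdown, keep = 32) {
  return markdown.replace(
    /(data:image\/[a-zA-Z+]+;base64,)([A-Za-z0-9+/=]+)/g,
    (match, prefix, data) =>
      data.length > keep ? `${prefix}${data.slice(0, keep)}...[truncated]` : match,
  );
}

const md = `![logo](data:image/png;base64,${'A'.repeat(200)})`;
const shortened = shortenBase64Images(md);
```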

useMainContentOnly
boolean
default:false

Extract only the main content, stripping headers, footers, sidebars, and navigation

followSubdomains
boolean
default:false

When true, follow links on subdomains of the starting URL's domain (e.g. docs.example.com when starting from example.com). The www subdomain and the apex domain are always treated as equivalent.
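The rule above can be sketched as a host comparison: www and the apex domain count as the same site, and other subdomains are followed only when followSubdomains is true. This simple suffix check is an illustration, not the service's code.

```javascript
// Decide whether a linked host should be crawled, given the starting host.
function shouldFollow(startHost, linkHost, followSubdomains) {
  const apex = (h) => h.replace(/^www\./, ''); // www and apex are equivalent
  const start = apex(startHost);
  const link = apex(linkHost);
  if (link === start) return true; // same site, always followed
  return followSubdomains && link.endsWith('.' + start);
}
```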

Response

Successful response

results
object[] · required

metadata
object · required
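One plausible reading of the response schema is that the top-level metadata summarizes the per-page results: numSucceeded and numFailed count the success flags, and maxCrawlDepth is the deepest crawlDepth reached. The field names come from the schema above; the aggregation itself is an assumption.

```javascript
// Derive top-level crawl metadata from the per-page results array.
function summarize(results) {
  return {
    numUrls: results.length,
    maxCrawlDepth: Math.max(0, ...results.map((r) => r.metadata.crawlDepth)),
    numSucceeded: results.filter((r) => r.metadata.success).length,
    numFailed: results.filter((r) => !r.metadata.success).length,
  };
}

const summary = summarize([
  { markdown: '# A', metadata: { url: 'https://example.com', title: 'A', crawlDepth: 0, statusCode: 200, success: true } },
  { markdown: '', metadata: { url: 'https://example.com/b', title: '', crawlDepth: 1, statusCode: 404, success: false } },
]);
// summary -> { numUrls: 2, maxCrawlDepth: 1, numSucceeded: 1, numFailed: 1 }
```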