Send a URL and get back the page content. Scrapely handles JavaScript rendering and anti-bot protection automatically.
```bash
curl -X POST https://api.scrapely.io/v2/tasks/create \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY" \
  -d '{
    "crawler": {
      "websiteURL": "https://example.com",
      "return_page_source": true
    }
  }'
```
Request fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `websiteURL` | string | Yes | — | URL to crawl. Must start with `http://` or `https://`. Max 254 chars. |
| `return_page_source` | boolean | No | `false` | Return the full HTML source of the page. |
| `return_page_text` | boolean | No | `false` | Return the page content as plain text. |
| `return_page_cookies` | boolean | No | `false` | Return the page cookies after load. |
| `return_page_meta` | boolean | No | `false` | Return the page meta tags. |
| `return_user_agent` | boolean | No | `false` | Return the user agent used during the crawl. |
| `block_resources` | boolean | No | `false` | Block images, fonts, and stylesheets to speed up the crawl. |
| `device` | string | No | `"desktop"` | Device to emulate. Either `"desktop"` or `"mobile"`. |
`return_page_source` and `return_page_text` cannot be used together in the same request.
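The field constraints above can be checked client-side before a request is sent. A minimal sketch in Python — `build_crawler_payload` is a hypothetical helper for illustration, not part of any Scrapely SDK:

```python
def build_crawler_payload(website_url, *, return_page_source=False,
                          return_page_text=False, device="desktop"):
    """Build the "crawler" object for a task-creation request,
    enforcing the documented field constraints."""
    # websiteURL must use http(s) and stay within 254 characters.
    if not website_url.startswith(("http://", "https://")):
        raise ValueError("websiteURL must start with http:// or https://")
    if len(website_url) > 254:
        raise ValueError("websiteURL must be at most 254 characters")
    # return_page_source and return_page_text are mutually exclusive.
    if return_page_source and return_page_text:
        raise ValueError("return_page_source and return_page_text "
                         "cannot be used together")
    if device not in ("desktop", "mobile"):
        raise ValueError('device must be "desktop" or "mobile"')
    return {
        "websiteURL": website_url,
        "return_page_source": return_page_source,
        "return_page_text": return_page_text,
        "device": device,
    }
```

Validating locally surfaces errors immediately instead of spending a task request to learn the payload was malformed.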
proxy (optional)
| Field | Type | Required | Description |
|---|---|---|---|
| `scheme` | string | Yes | Proxy scheme (e.g. `http`, `socks5`). |
| `host` | string | Yes | Proxy host. |
| `port` | integer | Yes | Proxy port. |
| `username` | string | No | Proxy username. |
| `password` | string | No | Proxy password. Required if `username` is provided. |
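The username/password dependency can be enforced the same way; a sketch with a hypothetical `build_proxy` helper (not part of any SDK):

```python
def build_proxy(scheme, host, port, username=None, password=None):
    """Build the optional "proxy" object for a task-creation request."""
    if not isinstance(port, int):
        raise TypeError("port must be an integer")
    # A password is required whenever a username is supplied.
    if username is not None and password is None:
        raise ValueError("password is required when username is provided")
    proxy = {"scheme": scheme, "host": host, "port": port}
    if username is not None:
        proxy["username"] = username
        proxy["password"] = password
    return proxy
```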
options (optional)
| Field | Type | Description |
|---|---|---|
| `user_agent` | string | Custom user agent string. Min 10, max 500 chars. |
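The length bounds on `user_agent` can also be verified before sending; a one-function sketch (hypothetical helper):

```python
def validate_user_agent(user_agent):
    """Check a custom user agent against the documented 10-500 char bounds."""
    if not (10 <= len(user_agent) <= 500):
        raise ValueError("user_agent must be between 10 and 500 characters")
    return user_agent
```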
Response
```json
{
  "success": true,
  "task_id": "52989a12-a43c-4bf9-ba1d-8ab1e1509169",
  "status": "completed",
  "created_at": "2026-04-06T10:54:56.652354+00:00",
  "result": {
    "html": "<!DOCTYPE html>...",
    "text": "",
    "screenshot": "",
    "user_agent": "",
    "cookies": {},
    "metadata": {},
    "instructions": []
  },
  "completed_at": "2026-04-06T10:55:08.312452+00:00"
}
```
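The response is plain JSON, so extracting the crawl result is straightforward. A sketch assuming the field names shown above (the embedded body is an abbreviated copy of the sample response):

```python
import json

# Abbreviated copy of the sample response body.
response_body = """
{
  "success": true,
  "task_id": "52989a12-a43c-4bf9-ba1d-8ab1e1509169",
  "status": "completed",
  "result": {"html": "<!DOCTYPE html>...", "text": "", "cookies": {}}
}
"""

task = json.loads(response_body)

# Only read the result once the task reports success and completion.
html = None
if task["success"] and task["status"] == "completed":
    html = task["result"]["html"]
```

Since `return_page_source` and `return_page_text` are mutually exclusive, only one of `result.html` and `result.text` will be populated for a given task.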