Neo Crawl API

Transform any website into clean, structured data with our blazing-fast web scraping API. Better than Firecrawl, faster than traditional scrapers.

Get Started in Minutes

Transform any website into clean, structured data with just a few API calls. Our service is faster, more reliable, and easier to use than traditional web scraping tools.

Lightning Fast

Get structured data from any website in milliseconds with our optimized scraping engine.

Enterprise Security

Built with a security-first approach. All data is encrypted and API keys are securely managed.

Global Scale

Scrape websites from anywhere in the world with our distributed infrastructure.

Developer Friendly

Simple REST API with comprehensive documentation and multiple response formats.

Three Simple Steps

01

Create Account

Sign up for a free account to get started with Neo Crawl API

POST /auth/register

02

Get API Key

Generate your unique API key for authentication

POST /auth/generate-secret

03

Start Scraping

Make your first API call and get structured data

GET /api/scrapper

Example Usage

1. Register Account

POST /auth/register
json
{
  "username": "your_username",
  "password": "your_password"
}

2. Make API Call

Scrape a Website
bash
curl -X GET "https://api.neocrawl.com/api/scrapper?url=https://example.com" \
  -H "x-api-key: your_api_key"

3. Get Structured Response

API Response
json
{
  "message": "Success",
  "url": "https://example.com",
  "result1": {
    "title": "Example Domain",
    "headings": ["Welcome to Example"],
    "links": [...]
  },
  "result2": "Clean text content...",
  "result3": "# Markdown formatted content"
}

Authentication

Secure your API access with our robust authentication system. Get your API key in three simple steps.

POST /auth/register

Create Account

Register a new user account to get started with Neo Crawl API.

Parameters

Name       Type     Required   Description
username   string   Required   Your unique username
password   string   Required   Strong password (min 8 characters)

Request Example

json
{
  "username": "your_username",
  "password": "your_secure_password"
}

Response Example

json
{
  "msg": "user created",
  "user_id": "12345"
}
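
If you prefer to script this step, here is a minimal Python sketch using the requests library, assuming the endpoint accepts the JSON body shown in the request example:

python
import requests

# Register a new account (JSON body, as in the request example above)
resp = requests.post(
    "https://api.neocrawl.com/auth/register",
    json={"username": "your_username", "password": "your_secure_password"},
)
resp.raise_for_status()            # raises on 4xx/5xx responses
print(resp.json().get("user_id"))  # e.g. "12345" per the response example
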
POST /auth/login

User Login

Authenticate and receive an access token for API operations.

Parameters

Name       Type     Required   Description
username   string   Required   Your username
password   string   Required   Your password

Request Example

text
# Form Data
username=your_username
password=your_secure_password

Response Example

json
{
  "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "token_type": "bearer",
  "expires_in": 3600
}
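
The same login from Python, assuming the form-encoded body shown above (requests sends data= as form fields):

python
import requests

# Log in with form data and keep the bearer token for later calls
resp = requests.post(
    "https://api.neocrawl.com/auth/login",
    data={"username": "your_username", "password": "your_secure_password"},
)
resp.raise_for_status()
access_token = resp.json()["access_token"]  # sent later as "Authorization: Bearer <token>"
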
POST /auth/generate-secret

Generate API Key

Generate your secret API key for making scraping requests.

Headers

Authorization: Bearer YOUR_ACCESS_TOKEN

Access token from login

Request Example

bash
curl -X POST "https://api.neocrawl.com/auth/generate-secret" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"

Response Example

json
{
  "secret_token": "ncl_sk_1234567890abcdef...",
  "created_at": "2025-08-25T10:00:00Z"
}
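
And the equivalent call from Python, reusing the access token returned by /auth/login (a sketch, not an official SDK):

python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # returned by POST /auth/login

resp = requests.post(
    "https://api.neocrawl.com/auth/generate-secret",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
api_key = resp.json()["secret_token"]  # e.g. "ncl_sk_..." per the response example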

API Reference

Complete reference for all Neo Crawl API endpoints with detailed examples and response formats.

GET /api/scrapper

Scrape Website

Extract structured data from any website URL. Returns JSON, text, and markdown formats.

Parameters

Name   Type     Required   Description
url    string   Required   The URL to scrape (must be properly encoded)

Headers

x-api-key: YOUR_API_KEY

Your secret API key

Request Example

bash
curl -X GET "https://api.neocrawl.com/api/scrapper?url=https://example.com" \
  -H "x-api-key: ncl_sk_1234567890abcdef..."

Response Example

json
{
  "message": "Success",
  "url": "https://example.com",
  "result1": {
    "title": "Example Domain",
    "meta_description": "This domain is for use in illustrative examples...",
    "headings": {
      "h1": ["Example Domain"],
      "h2": ["About", "Contact"],
      "h3": []
    },
    "links": [
      {
        "text": "More information...",
        "href": "https://www.iana.org/domains/example",
        "type": "external"
      }
    ],
    "images": [
      {
        "src": "https://example.com/logo.png",
        "alt": "Example Logo",
        "width": 200,
        "height": 100
      }
    ],
    "structured_data": {}
  },
  "result2": "Example Domain\n\nThis domain is for use in illustrative examples in documents...",
  "result3": "# Example Domain\n\nThis domain is for use in illustrative examples..."
}
GET /usage/dashboard

Usage Dashboard

Get detailed analytics and usage statistics for your API consumption.

Headers

Authorization: Bearer YOUR_ACCESS_TOKEN

Access token from login

Request Example

bash
curl -X GET "https://api.neocrawl.com/usage/dashboard" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"

Response Example

json
{
  "user_id": "12345",
  "plan": 1,
  "monthly_limit": 20,
  "calls_made": 15,
  "calls_remaining": 5,
  "reset_date": "2025-09-01T00:00:00Z",
  "usage_history": [
    {
      "date": "2025-08-25",
      "calls": 3,
      "success_rate": 100
    }
  ]
}
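
A small Python sketch that reads these fields and warns when the monthly quota is nearly exhausted (field names taken from the response example above):

python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # from /auth/login

resp = requests.get(
    "https://api.neocrawl.com/usage/dashboard",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
usage = resp.json()

# Warn when only a couple of calls remain before the monthly reset
if usage["calls_remaining"] <= 2:
    print(f"Only {usage['calls_remaining']} of {usage['monthly_limit']} calls left "
          f"(resets {usage['reset_date']})")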

Usage Limits & Pricing

Flexible pricing plans to suit your needs, from hobby projects to enterprise applications.

Free

$0
per month
10 API calls included
  • 10 API calls/month
  • Basic support
  • JSON + Text output
  • Rate limit: 1/sec
Most Popular

Starter

$9
per month
20 API calls included
  • 20 API calls/month
  • Email support
  • All output formats
  • Rate limit: 2/sec

Pro

$29
per month
30 API calls included
  • 30 API calls/month
  • Priority support
  • Advanced features
  • Rate limit: 5/sec

Important Notes

  • API limits reset monthly on your registration date
  • Exceeding your limit returns a 403 Forbidden error (handled in the sketch after this list)
  • Upgrade anytime to increase your monthly quota
  • Enterprise plans available for higher volume needs
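
Because an exhausted quota surfaces as a 403, it is worth catching that status explicitly. A minimal Python sketch (the error message is just an illustration):

python
import requests

def scrape(url, api_key):
    resp = requests.get(
        "https://api.neocrawl.com/api/scrapper",
        params={"url": url},
        headers={"x-api-key": api_key},
    )
    if resp.status_code == 403:
        # Monthly quota exceeded (see Important Notes above)
        raise RuntimeError("Monthly API limit reached; wait for the reset date or upgrade")
    resp.raise_for_status()
    return resp.json()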

Code Examples

Ready-to-use code examples in your favorite programming language. Copy, paste, and start scraping!

JS
JavaScript / Browser

javascript
const neoCrawl = async (url) => {
  const response = await fetch(`https://api.neocrawl.com/api/scrapper?url=${encodeURIComponent(url)}`, {
    headers: {
      'x-api-key': 'your_api_key'
    }
  });
  
  if (!response.ok) {
    throw new Error(`HTTP error! status: ${response.status}`);
  }
  
  return await response.json();
};

// Usage (inside an async function or ES module, since top-level await is used)
try {
  const data = await neoCrawl('https://example.com');
  // Process your scraped data here
} catch (error) {
  console.error('Error:', error);
}

PY
Python

python
import requests

class NeoCrawl:
    def __init__(self, api_key):
        self.api_key = api_key
        self.base_url = "https://api.neocrawl.com"
    
    def scrape(self, url):
        headers = {"x-api-key": self.api_key}
        params = {"url": url}
        
        response = requests.get(
            f"{self.base_url}/api/scrapper",
            headers=headers,
            params=params
        )
        
        if response.status_code == 200:
            return response.json()
        else:
            response.raise_for_status()

# Usage
crawler = NeoCrawl("your_api_key")
result = crawler.scrape("https://example.com")
print(result["result1"])

JS
Node.js

javascript
const axios = require('axios');

class NeoCrawl {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.baseURL = 'https://api.neocrawl.com';
  }

  async scrape(url) {
    try {
      const response = await axios.get(`${this.baseURL}/api/scrapper`, {
        params: { url },
        headers: { 'x-api-key': this.apiKey }
      });
      
      return response.data;
    } catch (error) {
      throw new Error(`Neo Crawl API Error: ${error.response?.data?.message || error.message}`);
    }
  }
}

// Usage
const crawler = new NeoCrawl('your_api_key');
crawler.scrape('https://example.com')
  .then(data => {
    // Process your data here
  })
  .catch(error => console.error(error));

Installation Guide

Get up and running with Neo Crawl in your favorite programming environment.

No Installation Required!

Neo Crawl is a REST API service - no installation needed. Just get your API key and start making requests.

Works with any programming language
No dependencies to manage
Always up-to-date

SDKs & Wrappers

While not required, we provide helpful code examples and wrapper functions for popular languages.

JavaScript/TypeScript
Python
Node.js
cURL examples

Your First API Call

Let's make your first successful API call step by step.

Step-by-Step Tutorial

1

Get Your API Key

After registration, generate your secret API key from the dashboard.

2

Choose a URL to Scrape

Start with a simple website like https://example.com

3

Make the Request

Send a GET request with your API key in the header.

Your First API Call
bash
curl -X GET "https://api.neocrawl.com/api/scrapper?url=https://example.com" \
  -H "x-api-key: YOUR_API_KEY"

API Key Management

Learn how to securely manage and use your API keys.

Security Best Practices

  • Never expose your API key in client-side code
  • Use environment variables to store keys
  • Regenerate keys if compromised
  • Monitor usage for unusual activity

Key Features

  • Generate unlimited API keys
  • Real-time usage tracking
  • Instant key regeneration
  • Detailed analytics dashboard

Environment Variable Setup
bash
# Store your API key securely
export NEO_CRAWL_API_KEY="ncl_sk_1234567890abcdef..."

# Use in your requests
curl -H "x-api-key: $NEO_CRAWL_API_KEY" \
  "https://api.neocrawl.com/api/scrapper?url=https://example.com"

Account Registration

Create your Neo Crawl account to start scraping websites.

POST /auth/register

Create New Account

Register a new user account with username and password.

Parameters

Name       Type     Required   Description
username   string   Required   Unique username (3-50 characters)
password   string   Required   Strong password (minimum 8 characters)

Request Example

json
{
  "username": "your_username",
  "password": "your_secure_password"
}

Response Example

json
{
  "msg": "user created",
  "user_id": "12345"
}

User Login

Authenticate to access protected endpoints and manage your account.

POST /auth/login

User Authentication

Login with your credentials to receive an access token.

Parameters

Name       Type     Required   Description
username   string   Required   Your registered username
password   string   Required   Your account password

Request Example

text
# Form Data
username=your_username
password=your_secure_password

Response Example

json
{
  "access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "token_type": "bearer",
  "expires_in": 3600
}

Scraper Endpoint

The main endpoint for extracting structured data from any website URL.

GET /api/scrapper

Scrape Website Data

Extract structured data from any website URL. Returns JSON, text, and markdown formats.

Parameters

Name   Type     Required   Description
url    string   Required   The URL to scrape (must be properly encoded)

Headers

x-api-key: YOUR_API_KEY

Your secret API key

Request Example

bash
curl -X GET "https://api.neocrawl.com/api/scrapper?url=https://example.com" \
  -H "x-api-key: ncl_sk_1234567890abcdef..."

Response Example

json
{
  "message": "Success",
  "url": "https://example.com",
  "result1": {
    "title": "Example Domain",
    "meta_description": "This domain is for use in illustrative examples...",
    "headings": {
      "h1": ["Example Domain"],
      "h2": ["About", "Contact"],
      "h3": []
    },
    "links": [
      {
        "text": "More information...",
        "href": "https://www.iana.org/domains/example",
        "type": "external"
      }
    ],
    "images": [
      {
        "src": "https://example.com/logo.png",
        "alt": "Example Logo",
        "width": 200,
        "height": 100
      }
    ],
    "structured_data": {}
  },
  "result2": "Example Domain\n\nThis domain is for use in illustrative examples in documents...",
  "result3": "# Example Domain\n\nThis domain is for use in illustrative examples..."
}

Rate Limits

Understanding rate limits and how to optimize your API usage.

Free

Rate Limit: 1 req/sec
Burst: 5 requests
Monthly: 10 calls

Starter

Rate Limit: 2 req/sec
Burst: 10 requests
Monthly: 20 calls

Pro

Rate Limit: 5 req/sec
Burst: 25 requests
Monthly: 30 calls

Rate Limit Headers

Every API response includes rate limit information in the headers:

text
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
X-RateLimit-Reset: 1640995200
X-RateLimit-Retry-After: 3600
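
A hedged Python sketch that inspects these headers and backs off once the remaining budget hits zero (header names are taken from the example above; adjust if your responses differ):

python
import time
import requests

resp = requests.get(
    "https://api.neocrawl.com/api/scrapper",
    params={"url": "https://example.com"},
    headers={"x-api-key": "YOUR_API_KEY"},
)

remaining = int(resp.headers.get("X-RateLimit-Remaining", "1"))
if remaining == 0:
    # Sleep until the rate-limit window resets before sending more requests
    retry_after = int(resp.headers.get("X-RateLimit-Retry-After", "60"))
    time.sleep(retry_after)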

Response Formats

Neo Crawl provides multiple response formats to suit your needs.

result1 (JSON)

Structured data with metadata, links, images, and more.

json
{
  "title": "Page Title",
  "meta_description": "...",
  "headings": {...},
  "links": [...],
  "images": [...]
}

result2 (Text)

Clean, readable text content without HTML tags.

text
Page Title

This is the main content
of the webpage in clean
text format...

result3 (Markdown)

Formatted markdown ready for documentation or blogs.

markdown
# Page Title

This is the main **content**
of the webpage in clean
[markdown](link) format...
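
All three formats arrive as sibling keys on the same response, so you can pick whichever fits your pipeline. A short Python sketch, assuming the response shape shown in the API reference:

python
import requests

resp = requests.get(
    "https://api.neocrawl.com/api/scrapper",
    params={"url": "https://example.com"},
    headers={"x-api-key": "YOUR_API_KEY"},
)
data = resp.json()

structured = data["result1"]  # dict: title, headings, links, images, ...
plain_text = data["result2"]  # clean text, no HTML tags
markdown   = data["result3"]  # markdown, ready for docs or blogs

print(structured["title"])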

Pricing Plans

Choose the plan that fits your scraping needs.

Free

$0
per month
10 API calls included
  • 10 API calls/month
  • Basic support
  • JSON + Text output
  • Rate limit: 1/sec
Most Popular

Starter

$9
per month
20 API calls included
  • 20 API calls/month
  • Email support
  • All output formats
  • Rate limit: 2/sec

Pro

$29
per month
30 API calls included
  • 30 API calls/month
  • Priority support
  • Advanced features
  • Rate limit: 5/sec

Usage Dashboard

Monitor your API usage and track analytics in real-time.

GET /usage/dashboard

Get Usage Statistics

Retrieve comprehensive usage analytics and account information.

Headers

Authorization: Bearer YOUR_ACCESS_TOKEN

Access token from login

Request Example

bash
curl -X GET "https://api.neocrawl.com/usage/dashboard" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN"

Response Example

json
{
  "user_id": "12345",
  "plan": 1,
  "monthly_limit": 20,
  "calls_made": 15,
  "calls_remaining": 5,
  "reset_date": "2025-09-01T00:00:00Z",
  "usage_history": [
    {
      "date": "2025-08-25",
      "calls": 3,
      "success_rate": 100
    }
  ]
}

Analytics & Insights

Gain insights into your scraping patterns and optimize your usage.

Available Metrics

Total API calls made
Success rate percentage
Average response time
Most scraped domains
Daily/weekly/monthly trends

Optimization Tips

Batch Requests

Group similar URLs to maximize efficiency

Cache Results

Store frequently accessed data locally (see the sketch after these tips)

Monitor Limits

Track usage to avoid hitting rate limits
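
As a concrete example of the caching tip above, here is a minimal in-memory cache around the scraper call (a sketch only; swap in Redis or a database if you need persistence):

python
import requests

_cache = {}  # url -> parsed response, kept in this process only

def scrape_cached(url, api_key):
    if url in _cache:
        return _cache[url]  # served locally, no API call consumed
    resp = requests.get(
        "https://api.neocrawl.com/api/scrapper",
        params={"url": url},
        headers={"x-api-key": api_key},
    )
    resp.raise_for_status()
    _cache[url] = resp.json()
    return _cache[url]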

cURL Examples

Ready-to-use cURL commands for testing and automation.

Basic Scraping

bash
# Simple scrape request
curl -X GET "https://api.neocrawl.com/api/scrapper?url=https://example.com" \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json"

With Error Handling

bash
# Scrape with error handling and output formatting
curl -X GET "https://api.neocrawl.com/api/scrapper?url=https://example.com" \
  -H "x-api-key: YOUR_API_KEY" \
  -w "HTTP Status: %{http_code}\nResponse Time: %{time_total}s\n" \
  -s -S \
  | jq '.'

Batch Processing

bash
# Process multiple URLs
urls=("https://example.com" "https://google.com" "https://github.com")

for url in "${urls[@]}"; do
  echo "Scraping: $url"
  curl -X GET "https://api.neocrawl.com/api/scrapper?url=$url" \
    -H "x-api-key: YOUR_API_KEY" \
    -s | jq '.result1.title'
  sleep 1  # Respect rate limits
done

JS
JavaScript Examples

Ready-to-use JavaScript code for browser and Node.js environments.

javascript
const neoCrawl = async (url) => {
  const response = await fetch(`https://api.neocrawl.com/api/scrapper?url=${encodeURIComponent(url)}`, {
    headers: {
      'x-api-key': 'your_api_key'
    }
  });
  
  if (!response.ok) {
    throw new Error(`HTTP error! status: ${response.status}`);
  }
  
  return await response.json();
};

// Usage (inside an async function or ES module, since top-level await is used)
try {
  const data = await neoCrawl('https://example.com');
  // Process your scraped data here
} catch (error) {
  console.error('Error:', error);
}

PY
Python Examples

Python class and examples for easy integration.

python
import requests

class NeoCrawl:
    def __init__(self, api_key):
        self.api_key = api_key
        self.base_url = "https://api.neocrawl.com"
    
    def scrape(self, url):
        headers = {"x-api-key": self.api_key}
        params = {"url": url}
        
        response = requests.get(
            f"{self.base_url}/api/scrapper",
            headers=headers,
            params=params
        )
        
        if response.status_code == 200:
            return response.json()
        else:
            response.raise_for_status()

# Usage
crawler = NeoCrawl("your_api_key")
result = crawler.scrape("https://example.com")
print(result["result1"])

JS
Node.js Examples

Server-side JavaScript implementation with axios.

javascript
const axios = require('axios');

class NeoCrawl {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.baseURL = 'https://api.neocrawl.com';
  }

  async scrape(url) {
    try {
      const response = await axios.get(`${this.baseURL}/api/scrapper`, {
        params: { url },
        headers: { 'x-api-key': this.apiKey }
      });
      
      return response.data;
    } catch (error) {
      throw new Error(`Neo Crawl API Error: ${error.response?.data?.message || error.message}`);
    }
  }
}

// Usage
const crawler = new NeoCrawl('your_api_key');
crawler.scrape('https://example.com')
  .then(data => {
    // Process your data here
  })
  .catch(error => console.error(error));

Ready to Start Scraping?

Join thousands of developers who trust Neo Crawl for their web scraping needs. Get started in minutes, not hours.