LinkMeta API Documentation
LinkMeta provides a fast, reliable REST API for extracting structured metadata from any URL. Extract Open Graph data, Twitter Cards, favicons, JSON-LD, and more with a single API call. The API is completely free and open - no API keys, no authentication, no rate limits.
Base URL: http://localhost:3000/api/v1 (development) or https://linkmeta.dev/api/v1 (production)

Extract Metadata
Extract metadata from a single URL. Returns Open Graph, Twitter Card, favicon, JSON-LD, and general metadata. Supports both GET (query params) and POST (JSON body).
Parameters
| Param | Type | Required | Description |
|---|---|---|---|
| url | string | required | The URL to extract metadata from |
| no_cache | boolean | optional | Set to true to bypass the cache |
| fields | string | optional | Comma-separated list of fields to return (e.g. title,image,statusCode,contentType). Returns all fields if omitted. Opt-in fields (excluded by default): body (page text), favicons (all discovered favicons with sizes). Available: url, statusCode, contentType, title, description, image, favicon, favicons, siteName, type, locale, language, author, publishedDate, modifiedDate, keywords, themeColor, canonical, wordCount, summary, redirectChain, twitter, openGraph, jsonLd, responseTime |
| timeout | integer | optional | Request timeout in milliseconds (default 10000, max 30000). Increase for slow-responding targets. |
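The query parameters above can be assembled client-side before calling the API. A minimal Python sketch using only the standard library (the `build_extract_url` helper is illustrative, not part of any official SDK):

```python
from urllib.parse import urlencode

BASE = "https://linkmeta.dev/api/v1"

def build_extract_url(url, fields=None, no_cache=False, timeout=None):
    """Assemble the GET /extract query string from the documented parameters."""
    params = {"url": url}
    if fields:
        # fields may be a list or an already comma-separated string
        params["fields"] = ",".join(fields) if isinstance(fields, (list, tuple)) else fields
    if no_cache:
        params["no_cache"] = "true"
    if timeout is not None:
        # clamp to the documented maximum of 30000 ms
        params["timeout"] = min(int(timeout), 30000)
    return f"{BASE}/extract?{urlencode(params)}"

print(build_extract_url("https://github.com", fields=["title", "image"]))
```

Note that `urlencode` percent-encodes the target URL, so it can safely be passed as a query parameter.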
Example Request
curl "https://linkmeta.dev/api/v1/extract?url=https://github.com"
# With field selection (only return title and image):
curl "https://linkmeta.dev/api/v1/extract?url=https://github.com&fields=title,image"
# POST method:
curl -X POST "https://linkmeta.dev/api/v1/extract" \
-H "Content-Type: application/json" \
  -d '{"url": "https://github.com", "fields": "title,image,openGraph"}'

Example Response
{
"status": "success",
"cached": false,
"data": {
"url": "https://github.com",
"statusCode": 200,
"contentType": "text/html",
"title": "GitHub: Let's build from here",
"description": "GitHub is where people build software...",
"image": "https://github.githubassets.com/assets/...",
"favicon": "https://github.githubassets.com/favicons/...",
"siteName": "GitHub",
"type": "website",
"locale": "en",
"language": "en",
"author": null,
"publishedDate": null,
"keywords": [],
"themeColor": "#1e2327",
"canonical": "https://github.com/",
"wordCount": 1847,
"summary": "GitHub is where over 100 million developers shape the future of software. Contribute to the open source community, manage your Git repositories, and more.",
"redirectChain": [],
"twitter": {
"card": "summary_large_image",
"site": "@github",
"creator": null,
"title": "GitHub: Let's build from here",
"description": "...",
"image": "..."
},
"openGraph": {
"title": "GitHub: Let's build from here",
"description": "...",
"image": "...",
"url": "https://github.com/",
"type": "website",
"siteName": "GitHub",
"locale": "en_US"
},
"jsonLd": null,
"responseTime": 142
}
}

Batch Extract
Extract metadata from up to 10 URLs in a single request.
Request Body
{
"urls": [
"https://github.com",
"https://example.com",
"https://dev.to"
],
"fields": "title,image,description"
}

The fields parameter is optional and works the same as in the single extract endpoint.
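Because the endpoint accepts at most 10 URLs per call, a client with a longer list needs to split it into batches. A minimal Python sketch (the helper names are assumptions, not part of any SDK):

```python
import json

MAX_BATCH = 10  # documented per-request limit

def chunk_urls(urls, size=MAX_BATCH):
    """Split a URL list into batches that respect the 10-URL limit."""
    return [urls[i:i + size] for i in range(0, len(urls), size)]

def batch_bodies(urls, fields=None):
    """Build one JSON request body per batch, matching the shape above."""
    bodies = []
    for batch in chunk_urls(urls):
        body = {"urls": batch}
        if fields:
            body["fields"] = fields
        bodies.append(json.dumps(body))
    return bodies
```

Each string in the returned list can then be POSTed to /batch with a Content-Type of application/json.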
Validate Metadata
Validate Open Graph, Twitter Card, and SEO metadata for any URL. Returns a score (0-100), letter grade (A-F), list of issues with severity levels, and platform-specific readiness for Facebook, Twitter/X, and LinkedIn. Use this as a replacement for Facebook's Sharing Debugger or to audit link preview quality before sharing.
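A client can gate sharing on the per-platform readiness flags and error-severity issues in the response. A minimal Python sketch against the documented response shape (the `sharing_ready` helper is an assumption, not part of any SDK):

```python
def sharing_ready(validation, platform="twitter"):
    """Return (ready, blocking_messages) for one platform from a
    /validate response body. Issues with severity "error" that target
    this platform (or no specific platform) are treated as blocking."""
    data = validation["data"]
    ready = data["platforms"][platform]["ready"]
    blocking = [
        issue["message"]
        for issue in data["issues"]
        if issue["severity"] == "error"
        and issue.get("platform") in (platform, None)
    ]
    return ready, blocking
```

Warnings (like a missing og:type) lower the score but are not treated as blocking here; that threshold is a policy choice, not something the API prescribes.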
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | The URL to validate metadata for |
| no_cache | boolean | No | Bypass the cache and fetch fresh metadata |
| timeout | integer | No | Request timeout in ms (default 10000, max 30000) |
Example Response
{
"status": "success",
"cached": false,
"data": {
"url": "https://example.com",
"score": 85,
"grade": "B",
"issues": [
{
"severity": "warning",
"tag": "og:type",
"message": "Missing og:type — defaults to \"website\"",
"platform": "facebook"
}
],
"platforms": {
"facebook": { "ready": true, "issues": [] },
"twitter": { "ready": true, "issues": [] },
"linkedin": { "ready": true, "issues": [] }
},
"summary": {
"errors": 0,
"warnings": 2,
"passed": 14,
"total": 16
},
"metadata": {
"title": "Example Domain",
"description": "...",
"openGraph": { "title": "Example Domain", ... },
"twitter": { "card": "summary", ... }
}
}
}

SDK Examples
cURL
curl "https://linkmeta.dev/api/v1/extract?url=https://github.com"
JavaScript / Node.js
const response = await fetch(
'https://linkmeta.dev/api/v1/extract?url=https://github.com'
);
const { data } = await response.json();
console.log(data.title, data.description, data.image);

Python
import requests
resp = requests.get(
'https://linkmeta.dev/api/v1/extract',
params={'url': 'https://github.com'}
)
data = resp.json()['data']
print(data['title'], data['description'])

Batch Request (JavaScript)
const response = await fetch(
'https://linkmeta.dev/api/v1/batch',
{
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
urls: ['https://github.com', 'https://dev.to', 'https://example.com']
})
}
);
const { results } = await response.json();

Response Format
All responses are JSON. Successful responses include "status": "success". Error responses include "error" and "message" fields.
Error Handling
| Status | Meaning |
|---|---|
| 400 | Bad Request - Missing or invalid URL |
| 422 | Unprocessable - URL returned non-2xx status |
| 504 | Timeout - URL took too long (default 10s, configurable via timeout param up to 30s) |
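One way a client might act on these statuses; the retry policy here is an assumption, not something the API prescribes:

```python
def classify_extract_error(status_code):
    """Map the documented error statuses to a client-side action."""
    if status_code == 400:
        return "fix-request"   # missing/invalid url param; retrying won't help
    if status_code == 422:
        return "skip"          # the target URL itself returned a non-2xx status
    if status_code == 504:
        return "retry-longer"  # retry with a larger timeout (up to 30000 ms)
    return "ok" if 200 <= status_code < 300 else "unknown"
```

A 422 means the extraction service worked but the target page did not; treating it like a transient failure and retrying is usually wasted effort.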
MCP Integration
LinkMeta ships with a built-in Model Context Protocol (MCP) server. Connect LinkMeta directly to Claude, VS Code, Cursor, or any MCP-compatible AI client so your assistant can extract URL metadata on demand.
Installation
Add the following to your MCP client configuration:
{
"mcpServers": {
"linkmeta": {
"command": "npx",
"args": ["-y", "linkmeta-api"]
}
}
}

Available Tools
| Tool | Description | Parameters |
|---|---|---|
| extract_metadata | Extract metadata from a single URL | url (string, required), no_cache (boolean, optional), fields (string, optional), timeout (integer, optional) |
| batch_extract_metadata | Extract metadata from multiple URLs at once | urls (string array, required, max 10) |
| validate_metadata | Validate OG/Twitter/SEO metadata; returns score, grade, issues, and platform readiness | url (string, required), no_cache (boolean, optional), timeout (integer, optional) |
Environment Variables
| Variable | Default | Description |
|---|---|---|
| LINKMETA_BASE_URL | https://linkmeta.dev | Override the base URL for API requests (e.g., for self-hosted instances) |