Free URL Metadata API
A URL metadata API lets you send any web address and receive structured information about that page in return: its title, description, Open Graph tags, Twitter Card data, favicons, canonical URL, JSON-LD structured data, and more. Instead of writing and maintaining your own HTML parser, you delegate the heavy lifting to a dedicated service that handles redirects, JavaScript rendering, charset detection, and edge cases across millions of different websites.
Developers use URL metadata APIs to power link previews in chat applications, validate social sharing cards before publishing, audit SEO tags at scale, aggregate content from RSS feeds, and enrich bookmarking tools with rich thumbnails and descriptions. The challenge has always been finding a service that is free, reliable, and requires no authentication overhead. This guide compares the three most popular options available today and shows you exactly how to integrate each one.
Why You Need a URL Metadata API
Building a metadata extraction system from scratch means dealing with an enormous surface area of problems. You need to handle HTTP redirects (301, 302, 307, 308), detect and convert character encodings, parse malformed HTML, execute JavaScript on pages that render client-side, respect robots.txt directives, manage request timeouts, and keep up with constantly evolving Open Graph and Twitter Card specifications. A purpose-built API eliminates all of that complexity.
Here are the four most common use cases that drive adoption of URL metadata APIs:
Link Previews
Chat applications, social platforms, and messaging tools need to display rich link previews when users share URLs. A metadata API extracts the title, description, and image so you can render a preview card instantly without loading the full page in the user's browser.
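As a sketch of how this works in practice, the helper below turns a metadata response into the handful of fields a preview card needs, with sensible fallbacks. The field names mirror the LinkMeta response example shown later in this guide; the function itself is illustrative, not part of any SDK.

```python
# Sketch: reduce an extracted metadata dict to the fields a preview card needs.
# Keys mirror the LinkMeta response example in this guide (openGraph, twitterCard,
# title, description, canonical); the fallback order is a common convention.

def build_preview(metadata: dict) -> dict:
    og = metadata.get("openGraph", {})
    tw = metadata.get("twitterCard", {})
    return {
        "title": og.get("og:title") or tw.get("twitter:title") or metadata.get("title", ""),
        "description": og.get("og:description") or metadata.get("description", ""),
        "image": og.get("og:image") or tw.get("twitter:image"),
        "url": og.get("og:url") or metadata.get("canonical"),
    }

sample = {
    "title": "Example Domain",
    "description": "An illustrative page.",
    "canonical": "https://example.com/",
    "openGraph": {"og:title": "Example Domain", "og:image": "https://example.com/card.png"},
    "twitterCard": {},
}
card = build_preview(sample)
print(card["title"])  # Example Domain
print(card["image"])  # https://example.com/card.png
```

Because the function is pure, you can render the card immediately from a cached response without re-fetching the page.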
SEO Auditing
SEO professionals need to verify that every page on a site has correct meta titles, descriptions, canonical URLs, and Open Graph tags. Batch extraction through an API enables auditing hundreds of pages in minutes rather than checking each one manually in a browser.
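A batch audit usually boils down to checking each page's metadata against a required-tag list. The checker below is a minimal sketch: the field names follow the LinkMeta response example in this guide, and the required-tag list is illustrative rather than an official checklist.

```python
# Sketch: flag missing SEO-critical tags in one extracted metadata dict.
# REQUIRED_OG is an illustrative baseline, not an official requirement list.

REQUIRED_OG = ["og:title", "og:description", "og:image", "og:url"]

def audit(metadata: dict) -> list[str]:
    problems = []
    if not metadata.get("title"):
        problems.append("missing <title>")
    if not metadata.get("description"):
        problems.append("missing meta description")
    og = metadata.get("openGraph", {})
    problems += [f"missing {tag}" for tag in REQUIRED_OG if not og.get(tag)]
    return problems

page = {"title": "Docs", "openGraph": {"og:title": "Docs", "og:image": "https://x.test/i.png"}}
print(audit(page))
# ['missing meta description', 'missing og:description', 'missing og:url']
```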
Social Share Validation
Before publishing content, marketing teams need to confirm that shared links will display the correct image, title, and description on Facebook, Twitter, LinkedIn, and Slack. A metadata API shows exactly what each platform will see.
Content Aggregation
News readers, bookmarking apps, and research tools need to display article metadata alongside links. Extracting titles, authors, publication dates, and thumbnails from arbitrary URLs turns raw links into browsable, organized content.
LinkMeta vs Microlink vs open-graph-scraper
Three tools dominate the URL metadata extraction space, but they serve very different needs. LinkMeta and Microlink are hosted REST APIs that you call over HTTP. The open-graph-scraper package is a Node.js library that runs inside your own application. The table below breaks down the key differences across pricing, features, authentication, and deployment model.
| Feature | LinkMeta | Microlink | open-graph-scraper |
|---|---|---|---|
| Type | Hosted REST API | Hosted REST API | Node.js library |
| Price | Free forever | Free tier (50 req/day), paid plans from $15.99/mo | Free (open source) |
| API Key Required | No | No (free tier), Yes (paid) | N/A (library) |
| Open Graph Extraction | Yes | Yes | Yes |
| Twitter Card Extraction | Yes | Yes | Yes |
| JSON-LD / Structured Data | Yes (full extraction) | Limited | No |
| Favicon Detection | Yes (multiple sizes) | Yes (via logo field) | No |
| Batch Extraction | Yes (up to 10 URLs) | No | No (manual loop) |
| Hosting Required | No | No | Yes (your own server) |
| JavaScript Rendering | Server-side | Server-side (Chromium) | No (HTML only) |
| Rate Limiting | Generous (IP-based) | 50 req/day free | None (self-hosted) |
| Response Format | JSON | JSON | JavaScript object |
| MCP Integration | Yes (official MCP server) | No | No |
LinkMeta stands out for teams that want zero-friction integration. There are no API keys to manage, no payment tiers to navigate, and no servers to provision. You send a GET request with a URL parameter and receive a comprehensive JSON response containing Open Graph tags, Twitter Cards, favicons in multiple sizes, JSON-LD structured data, and standard HTML metadata. The batch endpoint lets you extract metadata from up to 10 URLs in a single request, which is invaluable for dashboards and content management systems.
Microlink is a capable alternative with a polished developer experience, but the free tier is limited to 50 requests per day. For production workloads, you will need a paid plan starting at $15.99 per month. It excels at screenshot capture and PDF generation alongside metadata extraction, making it a broader toolkit at a higher price point.
open-graph-scraper is the right choice when you need complete control over the extraction process and are comfortable running your own infrastructure. As a Node.js library, it runs inside your application with no external API calls. The tradeoff is that you must handle HTTP requests, error recovery, rate limiting, and infrastructure scaling yourself. It also lacks JSON-LD extraction and favicon detection out of the box.
How to Use LinkMeta API
LinkMeta exposes a single extraction endpoint at GET /api/v1/extract. Pass any URL as a query parameter and receive structured metadata in JSON format. No authentication headers, no API keys, no signup process. The quickest way to try it is with curl:
```bash
curl -s "https://linkmeta.dev/api/v1/extract?url=https://github.com" | jq .
```
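The same request works from any HTTP client. Here is a standard-library Python sketch; the endpoint path and `url` parameter come from the curl example above, and running `fetch_metadata` requires network access.

```python
# Sketch: calling the extract endpoint from Python with only the stdlib.

import json
import urllib.parse
import urllib.request

API = "https://linkmeta.dev/api/v1/extract"

def build_extract_url(target: str) -> str:
    # URL-encode the target so its own query string and slashes survive intact.
    return f"{API}?{urllib.parse.urlencode({'url': target})}"

def fetch_metadata(target: str, timeout: float = 10.0) -> dict:
    with urllib.request.urlopen(build_extract_url(target), timeout=timeout) as resp:
        return json.load(resp)

print(build_extract_url("https://github.com"))
# https://linkmeta.dev/api/v1/extract?url=https%3A%2F%2Fgithub.com
```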
The response includes every piece of metadata the target page exposes. Here is a typical response structure:
```json
{
  "success": true,
  "metadata": {
    "title": "GitHub: Let's build from here",
    "description": "GitHub is where over 100 million developers shape the future of software.",
    "canonical": "https://github.com/",
    "favicon": "https://github.githubassets.com/favicons/favicon.svg",
    "openGraph": {
      "og:title": "GitHub: Let's build from here",
      "og:description": "GitHub is where over 100 million developers shape the future...",
      "og:image": "https://github.githubassets.com/images/modules/site/social-cards/github-social.png",
      "og:url": "https://github.com/",
      "og:type": "website",
      "og:site_name": "GitHub"
    },
    "twitterCard": {
      "twitter:card": "summary_large_image",
      "twitter:site": "@github",
      "twitter:title": "GitHub: Let's build from here"
    },
    "jsonLd": [ ... ],
    "favicons": [
      { "href": "https://github.githubassets.com/favicons/favicon.svg", "type": "image/svg+xml" },
      { "href": "https://github.githubassets.com/favicons/favicon.png", "sizes": "512x512" }
    ]
  }
}
```
For batch extraction, use the POST /api/v1/extract/batch endpoint to process up to 10 URLs in a single request:
```bash
curl -X POST "https://linkmeta.dev/api/v1/extract/batch" \
  -H "Content-Type: application/json" \
  -d '{
    "urls": [
      "https://github.com",
      "https://stackoverflow.com",
      "https://developer.mozilla.org"
    ]
  }'
```
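When you have more than 10 URLs, you need to split them into multiple batch requests. A minimal sketch of that chunking, assuming the 10-URL cap described above (the helper only builds request bodies, it does not perform the POSTs):

```python
# Sketch: split a long URL list into batch-endpoint request bodies of <= 10 URLs.

import json

def batch_payloads(urls: list[str], cap: int = 10):
    for i in range(0, len(urls), cap):
        yield json.dumps({"urls": urls[i:i + cap]})

urls = [f"https://example.com/page-{n}" for n in range(23)]
payloads = list(batch_payloads(urls))
print(len(payloads))  # 3 requests: 10 + 10 + 3 URLs
```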
What Data Can You Extract?
LinkMeta's extraction engine parses every metadata source embedded in a web page. Here is the complete list of data fields available in the API response:
Standard HTML Metadata
- Page title (`<title>`)
- Meta description
- Canonical URL
- Language / locale
- Meta author
- Meta keywords
- Charset encoding
Open Graph Tags
- og:title
- og:description
- og:image (+ dimensions)
- og:url
- og:type
- og:site_name
- og:locale
Twitter Card Tags
- twitter:card
- twitter:title
- twitter:description
- twitter:image
- twitter:site
- twitter:creator
Additional Data
- JSON-LD structured data
- Favicons (all sizes)
- Apple touch icons
- Theme color
- RSS/Atom feed URLs
- HTTP status code
- Response time (ms)
Frequently Asked Questions
Is LinkMeta really free? What are the limits?
Yes, LinkMeta is completely free with no paid tiers. There are no API keys, no signup, and no credit card required. The API uses IP-based rate limiting to ensure fair usage across all users. For most applications, the default rate limits are more than sufficient. If you are building a high-volume production application, consider caching responses on your side to reduce redundant requests.
How does LinkMeta compare to building my own scraper?
Building a custom scraper requires handling dozens of edge cases: character encoding detection, redirect chains, JavaScript-rendered pages, malformed HTML, timeout management, and keeping up with evolving Open Graph and Twitter Card specifications. LinkMeta handles all of this out of the box. A single API call replaces hundreds of lines of parsing code and eliminates the ongoing maintenance burden of keeping your scraper compatible with the ever-changing web.
Does LinkMeta extract data from JavaScript-rendered pages?
LinkMeta's extraction engine processes the HTML response returned by the server. For the vast majority of websites, metadata tags (Open Graph, Twitter Cards, JSON-LD) are present in the initial HTML response because search engines and social platforms also rely on server-rendered metadata. Single-page applications that inject metadata exclusively through client-side JavaScript are rare, but if you encounter one, the API will extract whatever metadata is available in the initial HTML document.
Can I use LinkMeta in production applications?
Absolutely. LinkMeta is designed for production use. The API runs on Azure infrastructure with automated health monitoring, and uptime is tracked on the status page. For best results in production, implement client-side caching with a TTL of 1 to 24 hours depending on how frequently your target URLs change their metadata. Caching reduces latency, cuts redundant requests, and keeps your application responsive even if the API is briefly unreachable.
What is the difference between Open Graph tags and Twitter Card tags?
Open Graph (OG) is a protocol created by Facebook that defines how web pages appear when shared on social platforms. Twitter Cards are Twitter's own metadata format that controls how links render in tweets. Most websites implement both, and Twitter will fall back to Open Graph tags when Twitter Card tags are missing. LinkMeta extracts both separately so you can see exactly what each platform will display when your URL is shared.
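The fallback behavior described above is easy to reproduce client-side. The sketch below prefers Twitter Card tags and falls back to Open Graph when they are absent; key names match the response example earlier in this guide, and the default card type is an assumption.

```python
# Sketch: resolve what Twitter would display, falling back to Open Graph
# tags when the corresponding twitter:* tags are missing.

def twitter_view(metadata: dict) -> dict:
    tw = metadata.get("twitterCard", {})
    og = metadata.get("openGraph", {})
    return {
        "card": tw.get("twitter:card", "summary"),
        "title": tw.get("twitter:title") or og.get("og:title"),
        "description": tw.get("twitter:description") or og.get("og:description"),
        "image": tw.get("twitter:image") or og.get("og:image"),
    }

meta = {
    "openGraph": {"og:title": "OG Title", "og:image": "https://x.test/og.png"},
    "twitterCard": {"twitter:card": "summary_large_image"},
}
print(twitter_view(meta)["title"])  # OG Title (no twitter:title set, so OG wins)
```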
Does LinkMeta support batch extraction?
Yes. The POST /api/v1/extract/batch endpoint accepts an array of up to 10 URLs and returns metadata for all of them in a single response. This is significantly faster than making individual requests because the API processes URLs concurrently on the server side. Batch extraction is ideal for SEO audit tools, content management dashboards, and any workflow that needs metadata from multiple pages at once.
How do I integrate LinkMeta with AI tools via MCP?
LinkMeta provides an official MCP (Model Context Protocol) server that allows AI assistants like Claude, VS Code Copilot, and Cursor to extract URL metadata directly. Install the linkmeta-mcp npm package and configure it in your MCP client settings. This enables AI-powered workflows where your assistant can fetch and analyze webpage metadata as part of research, content creation, or SEO analysis tasks.
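As an illustration only, an MCP client configuration typically looks like the fragment below. The exact file location and top-level keys vary by client, and the `npx` invocation of linkmeta-mcp is an assumption here; check the package's README for the authoritative setup.

```json
{
  "mcpServers": {
    "linkmeta": {
      "command": "npx",
      "args": ["-y", "linkmeta-mcp"]
    }
  }
}
```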
Related Resources
Continue learning about URL metadata extraction and link preview generation with these related guides:
- How to Extract Open Graph Tags from Any URL — deep dive into OG tag parsing and validation
- Link Preview API: Generate Rich Previews for Any URL — building link preview cards with LinkMeta
- LinkMeta API Documentation — complete endpoint reference with request/response schemas
- Interactive API Playground — test the API live in your browser with any URL