MCP Server
Connect AI assistants to your blog via Model Context Protocol
LxBlog exposes a Model Context Protocol (MCP) server that allows AI assistants like Claude, Cursor, Windsurf, and other MCP-compatible clients to interact with your blog content directly. The MCP server runs on the same deployment — no separate service needed.
What is MCP?
The Model Context Protocol is an open standard that lets AI applications discover and use tools exposed by external services. Instead of copying article content into prompts manually, an AI assistant can call LxBlog tools to list, search, and read your articles in real time.
Endpoint
POST /api/mcp

MCP JSON-RPC endpoint. Accepts standard MCP protocol messages with Bearer token authentication.
Headers
| Name | Type | Description |
|---|---|---|
| Authorization* | string | Bearer token: "Bearer lxb_at_..." |
| Content-Type* | string | Must be "application/json" |
Authentication
The MCP endpoint uses the same OAuth Bearer token as the REST API. Include your access token in the Authorization header of every request. The token determines which blog the AI assistant can access.
Access tokens expire after 1 hour. If a tool call returns an authentication error, refresh your token using the /api/oauth/token endpoint with your refresh token, then update your MCP client configuration.
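Refreshing follows the standard OAuth 2.0 refresh-token flow. As a minimal sketch (the exact request body field names for /api/oauth/token are an assumption based on standard OAuth, not confirmed by this page):

```python
import json

# Hypothetical helper: frame a refresh request for /api/oauth/token.
# The grant_type/refresh_token field names follow the standard OAuth 2.0
# refresh flow and are an assumption, not confirmed LxBlog parameters.
def build_refresh_request(base_url, refresh_token):
    return {
        "url": f"{base_url}/api/oauth/token",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
        }),
    }

req = build_refresh_request("https://lxblog.app", "lxb_rt_example")
print(req["url"])  # → https://lxblog.app/api/oauth/token
```

After a successful refresh, paste the new access token into your MCP client configuration — most clients read the Authorization header from static config, so they will not pick up the new token automatically.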
Configuration
Claude Desktop
Add the following to your Claude Desktop configuration file (claude_desktop_config.json):
```json
{
  "mcpServers": {
    "lxblog": {
      "url": "https://lxblog.app/api/mcp",
      "headers": {
        "Authorization": "Bearer lxb_at_your_access_token"
      }
    }
  }
}
```

Claude Code
Add the MCP server via the CLI:
```bash
claude mcp add lxblog \
  --transport http \
  --url https://lxblog.app/api/mcp \
  --header "Authorization: Bearer lxb_at_your_access_token"
```

Cursor
In your project's .cursor/mcp.json:
```json
{
  "mcpServers": {
    "lxblog": {
      "url": "https://lxblog.app/api/mcp",
      "headers": {
        "Authorization": "Bearer lxb_at_your_access_token"
      }
    }
  }
}
```

Local Development
For local development, replace the URL with your local lxblog instance:
```json
{
  "url": "http://localhost:3000/api/mcp",
  "headers": {
    "Authorization": "Bearer lxb_at_your_access_token"
  }
}
```

Available Tools
list_articles
List published blog articles with pagination. Returns article metadata, content, tags, and cover images.
Parameters
| Name | Type | Description |
|---|---|---|
| page | integer | Page number, starting at 1 (default: 1) |
| limit | integer | Articles per page, max 100 (default: 20) |
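Because results are paginated, a client typically walks page by page until it reaches totalPages. A sketch of that loop — fetch_page is a placeholder for whatever transport your MCP client uses to make the tools/call round-trip, not an LxBlog API:

```python
# Walk every page of list_articles results.
# fetch_page(page, limit) is a placeholder for an actual tools/call
# round-trip; only the pagination logic is shown here.
def collect_all_articles(fetch_page, limit=20):
    articles, page = [], 1
    while True:
        result = fetch_page(page=page, limit=limit)
        articles.extend(result["articles"])
        if page >= result["pagination"]["totalPages"]:
            return articles
        page += 1

# Example with a fake two-page backend standing in for the server:
def fake_fetch(page, limit):
    data = {1: [{"id": "a"}, {"id": "b"}], 2: [{"id": "c"}]}
    return {"articles": data[page],
            "pagination": {"page": page, "limit": limit,
                           "total": 3, "totalPages": 2}}

print(len(collect_all_articles(fake_fetch)))  # → 3
```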
```json
{
  "articles": [
    {
      "id": "clx...",
      "slug": "getting-started-with-ai",
      "title": "Getting Started with AI",
      "description": "A beginner's guide...",
      "content": "# Getting Started with AI\n\n...",
      "coverImageUrl": "https://...",
      "tags": ["ai", "tutorial"],
      "publishedAt": "2025-01-15T10:00:00.000Z",
      "contentHash": "a1b2c3..."
    }
  ],
  "pagination": { "page": 1, "limit": 20, "total": 12, "totalPages": 1 }
}
```

get_article
Get a single published article by its slug or CUID. Returns the full article including markdown content.
Parameters
| Name | Type | Description |
|---|---|---|
| identifier* | string | Article slug (e.g. "my-article") or CUID |
search_articles
Search published articles by keyword or tag. Matches against title, description, and tags.
Parameters
| Name | Type | Description |
|---|---|---|
| query | string | Search query (matches title and description) |
| tag | string | Filter by exact tag name |
| limit | integer | Maximum results to return, max 50 (default: 10) |
sync_articles
Diff-based article sync. Send your known articles (id + contentHash) and receive only new, updated, and deleted articles. Send an empty array for initial sync to get all articles.
Parameters
| Name | Type | Description |
|---|---|---|
| knownArticles | array | Articles you already have; each object needs "id" and "contentHash" fields (default: []) |
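On the client side, the tool's response (shown below) can be applied to a local cache keyed by article id, and the next knownArticles payload derived from that cache. A sketch — the cache dict is a hypothetical client-side structure; only the "new", "updated", and "deleted" response keys come from the tool:

```python
# Apply a sync_articles diff to a local cache of articles keyed by id.
# The cache dict is a hypothetical client-side structure; "new",
# "updated", and "deleted" mirror the tool's response fields.
def apply_sync_diff(cache, diff):
    for article in diff.get("new", []) + diff.get("updated", []):
        cache[article["id"]] = article
    for removed_id in diff.get("deleted", []):
        cache.pop(removed_id, None)
    return cache

# The next request's knownArticles payload is derived from the cache:
def known_articles(cache):
    return [{"id": a["id"], "contentHash": a["contentHash"]}
            for a in cache.values()]
```

Sending known_articles(cache) on every sync means the server only ever returns articles that actually changed since the cached state.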
```json
{
  "new": [{ "id": "...", "slug": "new-article", "title": "...", ... }],
  "updated": [{ "id": "...", "slug": "changed-article", "title": "...", ... }],
  "deleted": ["old-article-id"],
  "syncedAt": "2025-01-15T12:00:00.000Z"
}
```

get_blog_info
Get blog metadata including name, description, language, article count, and connection status. No parameters required.
```json
{
  "blog": {
    "id": "clx...",
    "name": "My Tech Blog",
    "slug": "my-tech-blog",
    "description": "A blog about technology",
    "language": "en",
    "articleCount": 42
  },
  "connection": {
    "connectedAt": "2025-01-01T00:00:00.000Z",
    "lastSyncAt": "2025-01-15T12:00:00.000Z"
  }
}
```

Usage Examples
Asking Claude about your blog
Once configured, you can ask your AI assistant natural language questions and it will use the LxBlog tools automatically:
"List all my published articles"
→ Uses list_articles
"What articles do I have about machine learning?"
→ Uses search_articles with query "machine learning"
"Show me the full content of my article about React hooks"
→ Uses search_articles then get_article
"How many articles does my blog have?"
→ Uses get_blog_info
"What articles have been updated since my last sync?"
→ Uses sync_articles with your cached article hashes

Testing with cURL
You can test the MCP endpoint directly with cURL by sending JSON-RPC messages:
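The same messages can also be scripted. A minimal Python sketch that frames the JSON-RPC envelopes used in the cURL examples — it only builds the payloads; sending them over HTTP with the Authorization and Content-Type headers is left to your HTTP client of choice:

```python
import itertools
import json

# Frame the MCP JSON-RPC envelopes used in the cURL examples; the
# method names and protocol version are taken from those examples.
_ids = itertools.count(1)

def rpc(method, params=None):
    """Build one JSON-RPC 2.0 message with an auto-incrementing id."""
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

init = rpc("initialize", {
    "protocolVersion": "2024-11-05",
    "clientInfo": {"name": "test", "version": "1.0.0"},
    "capabilities": {},
})
tools = rpc("tools/list")
call = rpc("tools/call", {
    "name": "list_articles",
    "arguments": {"page": 1, "limit": 5},
})
```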
```bash
# Initialize the MCP session
curl -X POST https://lxblog.app/api/mcp \
  -H "Authorization: Bearer lxb_at_your_token" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2024-11-05",
      "clientInfo": { "name": "test", "version": "1.0.0" },
      "capabilities": {}
    }
  }'
```

```bash
# List available tools
curl -X POST https://lxblog.app/api/mcp \
  -H "Authorization: Bearer lxb_at_your_token" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/list"
  }'
```

```bash
# Call a tool
curl -X POST https://lxblog.app/api/mcp \
  -H "Authorization: Bearer lxb_at_your_token" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
      "name": "list_articles",
      "arguments": { "page": 1, "limit": 5 }
    }
  }'
```

Scopes
MCP tools respect the same OAuth scopes as the REST API:
- `articles:read`: required for `list_articles`, `get_article`, `search_articles`, and `sync_articles`
- `blog:read`: sufficient for `get_blog_info`
The MCP server is stateless — each request is independent. There are no sessions to manage or connections to keep alive. This makes it ideal for serverless deployments on Vercel.