AI-Ready Documentation: Use Sorsa API Docs with LLMs

Sorsa documentation is optimized for use with AI assistants and large language models. You can feed our complete API reference directly into ChatGPT, Claude, Grok, Cursor, Copilot, or any other LLM to get accurate answers, generate integration code, and debug issues without manually reading through pages of docs.

The llms.txt File

We maintain a single, machine-readable file that contains the full Sorsa API specification - every endpoint, parameter, response schema, and authentication detail in one place:

https://docs.sorsa.io/llms.txt

The file is designed to fit within the context window of modern LLMs. Paste it into a conversation, attach it as a file, or add it to your project context, and the AI will have complete knowledge of the Sorsa API.
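Before pasting the file into a conversation, you may want to check that it actually fits your model's context window. The sketch below fetches the file and makes a rough size estimate; the ~4-characters-per-token ratio is a common rule of thumb, not an exact tokenizer, so treat the number as approximate:

```python
import urllib.request

LLMS_TXT_URL = "https://docs.sorsa.io/llms.txt"

def fetch_llms_txt(url: str = LLMS_TXT_URL) -> str:
    """Download the machine-readable Sorsa API reference as one string."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8")

def rough_token_count(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English/API text."""
    return len(text) // 4

# Example usage (requires network access):
# docs = fetch_llms_txt()
# print(rough_token_count(docs))  # compare against your model's context limit
```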

How to Use It

ChatGPT / Claude / Grok / Gemini: Copy the contents of llms.txt (or attach it as a file) at the start of your conversation, then ask questions like “Write a Python script that searches for tweets about AI and exports them to CSV” or “How do I paginate through all followers of an account?”

Cursor / Copilot / Claude Code / Codex / Windsurf: Add the file to your project context or reference it in your prompt. The AI will generate code that uses correct endpoint URLs, parameters, headers, and response parsing - without hallucinating field names or inventing endpoints that don’t exist.

Custom GPTs and assistants: Upload llms.txt as a knowledge file when building a custom GPT or AI assistant that needs to work with X/Twitter data via Sorsa.
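The same approach works in scripted pipelines: load the file once and prepend it as system context before the user's question. This is a minimal sketch assuming the widely used role/content chat message format; the file path and the system-prompt wording are placeholders, not part of the Sorsa docs:

```python
from pathlib import Path

def build_messages(docs_path: str, question: str) -> list[dict]:
    """Build a chat message list with the full Sorsa reference as system context."""
    docs = Path(docs_path).read_text(encoding="utf-8")
    system = ("You are a Sorsa API assistant. Answer using only the "
              "API reference below.\n\n" + docs)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Example usage:
# messages = build_messages("llms.txt", "How do I paginate through followers?")
# ...pass `messages` to your LLM client of choice.
```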

What’s Inside

The file contains a structured reference for every Sorsa API endpoint, including:

- the base URL and authentication method
- all endpoint paths with their HTTP methods
- required and optional parameters for each endpoint
- response field names and types
- pagination patterns with ready-to-use code examples
- the full list of search operators
- important edge cases, such as field name differences between Tweet and Article objects

It is kept up to date as the API evolves, so you always get the current specification.
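The pagination patterns mentioned above typically take the form of a cursor loop. The sketch below shows the general shape only: the field names (`data`, `cursor`, `next_cursor`) are assumptions for illustration, and the authoritative endpoint paths and parameter names are in llms.txt. The page fetcher is passed in as a callable so the loop works with any HTTP client:

```python
def paginate(fetch_page, params: dict):
    """Generic cursor-pagination loop.

    fetch_page(params) must return one page as a dict; the keys 'data',
    'cursor', and 'next_cursor' used here are illustrative placeholders -
    check llms.txt for the real field names.
    """
    cursor = None
    while True:
        query = dict(params)
        if cursor is not None:
            query["cursor"] = cursor
        page = fetch_page(query)
        yield from page.get("data", [])
        cursor = page.get("next_cursor")
        if cursor is None:
            break

# Example usage: wire `fetch_page` to an authenticated HTTP GET against
# the endpoint you need, then iterate:
# for item in paginate(my_fetch_page, {"username": "example"}):
#     print(item)
```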

Why This Matters

LLMs are only as good as the context they have. Without accurate API documentation, an AI assistant will guess at parameter names, invent response fields, or use outdated patterns. By providing llms.txt, you get code that works on the first try instead of debugging AI-generated hallucinations.