
How to Search for Tweets Using an API: Complete Guide with Code Examples

Searching for tweets programmatically is one of the most valuable capabilities a developer can have. Whether you need to build a social listening dashboard, run sentiment analysis on brand mentions, or extract Twitter data for market research - it all starts with a reliable tweet search API. In this guide, you will learn how to search tweets using the Sorsa API (/v3/search-tweets), master advanced query operators for precise data extraction, and build production-ready scripts in Python and JavaScript that handle pagination, filtering, and error handling.

Sorsa API is a developer-friendly alternative to the official X (formerly Twitter) API. It provides read-only access to public X data with simple API-key authentication, no OAuth flow, and no approval process. If you have ever struggled with Twitter API rate limits, pricing tiers, or the complexity of getting approved - Sorsa gives you the same search power with a fraction of the setup.

Why Search Tweets Programmatically?

The X (Twitter) search bar is great for casual browsing, but it falls short when you need to collect data at scale, apply complex filters, or integrate tweet data into your own applications. A tweet search API unlocks use cases that manual searching simply cannot handle:
  • Social listening and brand monitoring. Track what people say about your company, product, or competitors across the entire platform. Instead of refreshing a search tab, you get structured JSON data delivered straight to your pipeline - ready for dashboards, alerts, or analysis.
  • Market research and trend analysis. Identify emerging conversations in any niche - crypto, AI, finance, health, gaming. By combining keyword search with engagement filters (like minimum likes or retweets), you can surface the signal from the noise and spot trends before they hit mainstream.
  • Sentiment analysis. Feed tweet text into NLP models to gauge public opinion on a topic, product launch, or event. The search API gives you the raw text and engagement metrics; your model does the rest.
  • Lead generation and sales intelligence. Find people asking for recommendations, complaining about a competitor, or looking for solutions your product offers. A well-crafted search query is essentially a lead magnet.
  • Academic and journalistic research. Researchers and journalists regularly need to collect public discourse around events, policies, or social phenomena. A programmable search endpoint makes this repeatable and auditable.

The Search Tweets Endpoint: Request and Parameters

To search for tweets with Sorsa API, send a POST request to:
POST https://api.sorsa.io/v3/search-tweets
Authentication is a single header - ApiKey: YOUR_API_KEY (case-sensitive). No Bearer tokens, no OAuth dance.

Request Body (JSON)

  • query (string, required) - Your search keywords. Supports all native X search operators.
  • order (string, optional) - "popular" (default) matches the "Top" tab in X search; "latest" returns tweets chronologically, newest first.
  • next_cursor (string, optional) - Pagination cursor from a previous response. Omit on the first request.

Minimal cURL Example

curl -X POST https://api.sorsa.io/v3/search-tweets \
  -H "ApiKey: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "artificial intelligence",
    "order": "latest"
  }'
That’s it. One endpoint, three parameters, and you are pulling tweets from the global X firehose.

Understanding the Response

The API returns a JSON object with two fields: an array of tweet objects and a pagination cursor.
{
  "tweets": [
    {
      "id": "2029914600217473314",
      "full_text": "The latest breakthroughs in artificial intelligence are reshaping how we think about automation.",
      "created_at": "Fri Mar 06 13:38:49 +0000 2026",
      "lang": "en",
      "likes_count": 142,
      "retweet_count": 38,
      "reply_count": 12,
      "quote_count": 5,
      "view_count": 28400,
      "bookmark_count": 19,
      "is_reply": false,
      "is_quote_status": false,
      "conversation_id_str": "2029914600217473314",
      "entities": [],
      "user": {
        "id": "1422280682240450563",
        "username": "tech_insider",
        "display_name": "Tech Insider",
        "description": "Breaking tech news and analysis.",
        "followers_count": 84200,
        "verified": true
      }
    }
  ],
  "next_cursor": "DAABCgABGSmiaxkAAgoAAgjEJ..."
}
Every tweet in the tweets array comes with the full text, all engagement metrics (likes, retweets, replies, quotes, views, bookmarks), language tag, and the complete profile of the author embedded in the user object. This means a single API call gives you both content data and author data - no need for a second request to look up who posted the tweet. The next_cursor string is your key to pagination. When it is present, more results are available. When it is null or absent, you have reached the end.
For a full breakdown of every field in the Tweet and User objects, see Response Format.
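Because the author profile is embedded in each tweet, you can flatten a response item into a single analysis-ready record in a few lines. The sketch below uses a tweet dict copied (abridged) from the sample response above; the field names match the schema shown:

```python
# A tweet object abridged from the sample /v3/search-tweets response above.
tweet = {
    "id": "2029914600217473314",
    "full_text": "The latest breakthroughs in artificial intelligence are reshaping how we think about automation.",
    "likes_count": 142,
    "retweet_count": 38,
    "view_count": 28400,
    "user": {"username": "tech_insider", "followers_count": 84200},
}

def flatten(tweet: dict) -> dict:
    """Merge content metrics and the embedded author profile into one flat dict."""
    user = tweet.get("user", {})
    return {
        "id": tweet["id"],
        "text": tweet["full_text"],
        "likes": tweet.get("likes_count", 0),
        "retweets": tweet.get("retweet_count", 0),
        "views": tweet.get("view_count", 0),
        "author": user.get("username", ""),
        "author_followers": user.get("followers_count", 0),
    }

record = flatten(tweet)
print(record["author"], record["likes"])  # tech_insider 142
```

The `.get()` calls with defaults keep the flattener robust if an optional metric is ever missing from a tweet object.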

Advanced Query Operators: Precision Search for X Data

The query field is not just a keyword box. It supports the full range of native X search operators, giving you the same filtering power as the search bar on x.com - and more, because you can combine them programmatically.

Keywords and Phrases

  • artificial intelligence - matches tweets containing ALL of these words, in any order
  • "artificial intelligence" - matches the EXACT phrase only
  • AI machine learning - matches tweets containing all three words: "AI", "machine", and "learning" (space-separated terms are AND-ed; use OR for alternatives)

User-Based Operators

  • from:elonmusk - tweets posted by a specific account
  • to:openai - tweets replying to a specific account
  • @sorsa_app - tweets mentioning a specific account

Engagement Filters (Minimum Thresholds)

  • min_faves:100 - only tweets with at least 100 likes
  • min_retweets:50 - only tweets with at least 50 retweets
  • min_replies:10 - only tweets with at least 10 replies
These are extremely useful for filtering out low-quality or spam content. Searching for crypto min_faves:500 will surface only the most engaged-with crypto tweets - perfect for trend analysis or influencer discovery.

Content Filters

  • filter:links - only tweets containing URLs
  • filter:media - only tweets with images or videos
  • filter:images - only tweets with images
  • filter:videos - only tweets with videos
  • -filter:replies - exclude replies (show only original tweets)
  • -filter:retweets - exclude retweets
  • -filter:links - exclude tweets with links (useful for removing bot/spam content)

Language and Date Range

  • lang:en - only English tweets (use any ISO 639-1 code: es, fr, de, ja, etc.)
  • since:2026-01-01 - tweets posted on or after this date
  • until:2026-03-01 - tweets posted before this date

Exclusion and Boolean Logic

  • crypto -scam -airdrop - search for “crypto” but exclude tweets containing “scam” or “airdrop”
  • (bitcoin OR ethereum) min_faves:100 lang:en - English tweets about bitcoin or ethereum with 100+ likes
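Because these operators are plain text, they compose well programmatically. The helper below is a hypothetical convenience (not part of any Sorsa SDK) that builds a query string from parts, covering the OR-grouping, exclusion, and filter patterns shown above:

```python
def build_query(terms, exclude=(), min_faves=None, lang=None, no_retweets=False):
    """Compose an X search query string from parts.

    terms: keywords; more than one term is OR-ed inside parentheses.
    exclude: words to exclude (each is prefixed with '-').
    """
    parts = []
    if len(terms) > 1:
        parts.append("(" + " OR ".join(terms) + ")")
    elif terms:
        parts.append(terms[0])
    parts += [f"-{word}" for word in exclude]
    if min_faves is not None:
        parts.append(f"min_faves:{min_faves}")
    if lang:
        parts.append(f"lang:{lang}")
    if no_retweets:
        parts.append("-filter:retweets")
    return " ".join(parts)

print(build_query(["bitcoin", "ethereum"], min_faves=100, lang="en"))
# (bitcoin OR ethereum) min_faves:100 lang:en
```

A builder like this keeps complex Boolean queries readable in application code and makes them easy to unit-test.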

Real-World Query Examples

Here are practical query strings for common use cases.

Brand monitoring:
("your brand" OR "@yourbrand") -from:yourbrand -filter:retweets lang:en
This finds what others say about your brand, excluding your own posts and retweets, in English only.

Competitor intelligence:
(from:competitor1 OR from:competitor2) min_faves:50 -filter:replies
Surface your competitors' most engaging original content.

Crypto sentiment tracking:
(bitcoin OR $BTC) (bullish OR bearish OR crash OR moon) min_faves:20 lang:en since:2026-01-01
English tweets with sentiment-laden keywords about Bitcoin, with at least 20 likes, from 2026.

Product feedback mining:
"your product name" (bug OR issue OR love OR amazing OR hate) -filter:retweets
Find organic user feedback - both positive and negative.
For the complete list of available operators, see our Search Operators Reference.

Pagination: How to Collect Thousands of Tweets

A single search request returns one page of results (typically around 20 tweets). To collect larger datasets - hundreds or thousands of tweets - you use cursor-based pagination. The logic is straightforward:
  1. First request: Send your query and order. Do not include next_cursor.
  2. Read the cursor: The response contains a next_cursor string.
  3. Next request: Send the same query, same order, but add the next_cursor value you just received.
  4. Repeat until next_cursor is null, empty, or absent - that means you have fetched all available results.
This approach is more reliable than offset-based pagination because it handles real-time data insertion gracefully. New tweets posted between your requests will not cause duplicates or skipped results.
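The four steps above map naturally onto a Python generator. This is a minimal sketch (assuming the same endpoint and ApiKey header as the cURL example); the HTTP call is injectable via the `post` parameter, a hypothetical convenience that also makes the loop easy to test:

```python
import requests

API_KEY = "YOUR_API_KEY"
URL = "https://api.sorsa.io/v3/search-tweets"

def iter_search_pages(query, order="latest", post=requests.post):
    """Yield successive pages of tweets, following next_cursor until exhausted."""
    cursor = None
    while True:
        body = {"query": query, "order": order}
        if cursor:  # step 3: include the cursor from the previous response
            body["next_cursor"] = cursor
        resp = post(URL, headers={"ApiKey": API_KEY}, json=body)
        resp.raise_for_status()
        data = resp.json()
        yield data.get("tweets", [])
        cursor = data.get("next_cursor")
        if not cursor:  # step 4: null/empty/absent cursor means no more results
            return
```

Because it is a generator, callers can stop iterating as soon as they have enough tweets, without fetching pages they do not need.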
For a deep dive into pagination patterns, see Pagination.

Code Examples: Python and JavaScript

Below are production-ready scripts that search for tweets, handle pagination automatically, and collect results into a usable data structure. Both examples use the correct POST method and response keys.

Python: Search and Paginate

import requests
import time

API_KEY = "YOUR_API_KEY"
URL = "https://api.sorsa.io/v3/search-tweets"

def search_tweets(query, order="popular", max_pages=5):
    """
    Search for tweets using Sorsa API with automatic pagination.
    
    Args:
        query: Search string (supports X operators)
        order: "popular" (Top tab) or "latest" (chronological)
        max_pages: Maximum number of pages to fetch
    
    Returns:
        List of tweet objects
    """
    all_tweets = []
    next_cursor = None

    for page in range(max_pages):
        body = {
            "query": query,
            "order": order,
        }
        if next_cursor:
            body["next_cursor"] = next_cursor

        response = requests.post(
            URL,
            headers={
                "ApiKey": API_KEY,
                "Content-Type": "application/json",
            },
            json=body,
        )
        response.raise_for_status()
        data = response.json()

        tweets = data.get("tweets", [])
        all_tweets.extend(tweets)
        print(f"Page {page + 1}: fetched {len(tweets)} tweets (total: {len(all_tweets)})")

        next_cursor = data.get("next_cursor")
        if not next_cursor:
            print("No more results available.")
            break

        # Respect the 20 req/s rate limit
        time.sleep(0.1)

    return all_tweets


# --- Usage ---

# Basic keyword search
tweets = search_tweets("artificial intelligence", order="latest", max_pages=3)

# Brand monitoring with engagement filter
tweets = search_tweets('"Sorsa API" min_faves:5 lang:en', max_pages=10)

# Print results
for tweet in tweets:
    user = tweet["user"]
    print(f"@{user['username']} ({user['followers_count']} followers)")
    print(f"  {tweet['full_text'][:120]}...")
    print(f"  Likes: {tweet['likes_count']} | RTs: {tweet['retweet_count']} | Views: {tweet.get('view_count', 'N/A')}")
    print()
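The script above treats any non-2xx response as fatal via raise_for_status(). In production you may also want retries with backoff for transient failures (timeouts, 429s, 5xx). A generic sketch, independent of any particular HTTP library - the retryable call is passed in as a zero-argument function (the helper name is my own, not part of the API):

```python
import time

def call_with_retries(fn, max_retries=3, backoff=0.5):
    """Call fn(); on exception, retry up to max_retries times with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: re-raise the last error
            time.sleep(backoff * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Example: wrap the search request from the script above
# response = call_with_retries(lambda: requests.post(URL, headers=headers, json=body))
```

Keeping the retry logic separate from the request code means the same helper works for every Sorsa endpoint.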

JavaScript (Node.js): Search and Paginate

const API_KEY = "YOUR_API_KEY";
const URL = "https://api.sorsa.io/v3/search-tweets";

async function searchTweets(query, order = "popular", maxPages = 5) {
  const allTweets = [];
  let nextCursor = null;

  for (let page = 0; page < maxPages; page++) {
    const body = { query, order };
    if (nextCursor) body.next_cursor = nextCursor;

    const response = await fetch(URL, {
      method: "POST",
      headers: {
        "ApiKey": API_KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
    });

    if (!response.ok) {
      throw new Error(`API error: ${response.status} ${response.statusText}`);
    }

    const data = await response.json();
    const tweets = data.tweets || [];
    allTweets.push(...tweets);

    console.log(`Page ${page + 1}: fetched ${tweets.length} tweets (total: ${allTweets.length})`);

    nextCursor = data.next_cursor;
    if (!nextCursor) {
      console.log("No more results available.");
      break;
    }

    // Respect rate limits
    await new Promise((r) => setTimeout(r, 100));
  }

  return allTweets;
}

// --- Usage ---
(async () => {
  const tweets = await searchTweets("bitcoin lang:en min_faves:50", "latest", 5);

  for (const tweet of tweets) {
    console.log(`@${tweet.user.username}: ${tweet.full_text.slice(0, 100)}...`);
    console.log(`  Likes: ${tweet.likes_count} | RTs: ${tweet.retweet_count}\n`);
  }
})();

Full Working Example: Export Tweet Search Results to CSV

A common pipeline is: search tweets, paginate through results, then export everything to a CSV file for analysis in Excel, Google Sheets, or a data science notebook. Here is a complete Python script that does exactly that.
import requests
import time
import csv

API_KEY = "YOUR_API_KEY"
URL = "https://api.sorsa.io/v3/search-tweets"

def search_and_export(query, order="popular", max_pages=10, output_file="tweets.csv"):
    """Search tweets and export to CSV."""

    fieldnames = [
        "tweet_id", "created_at", "full_text", "lang",
        "likes", "retweets", "replies", "quotes", "views",
        "author_id", "username", "display_name", "followers_count", "verified",
    ]

    with open(output_file, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()

        next_cursor = None
        total = 0

        for page in range(max_pages):
            body = {"query": query, "order": order}
            if next_cursor:
                body["next_cursor"] = next_cursor

            resp = requests.post(
                URL,
                headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
                json=body,
            )
            resp.raise_for_status()
            data = resp.json()

            for tweet in data.get("tweets", []):
                user = tweet.get("user", {})
                writer.writerow({
                    "tweet_id": tweet["id"],
                    "created_at": tweet["created_at"],
                    "full_text": tweet["full_text"],
                    "lang": tweet.get("lang", ""),
                    "likes": tweet.get("likes_count", 0),
                    "retweets": tweet.get("retweet_count", 0),
                    "replies": tweet.get("reply_count", 0),
                    "quotes": tweet.get("quote_count", 0),
                    "views": tweet.get("view_count", 0),
                    "author_id": user.get("id", ""),
                    "username": user.get("username", ""),
                    "display_name": user.get("display_name", ""),
                    "followers_count": user.get("followers_count", 0),
                    "verified": user.get("verified", False),
                })
                total += 1

            next_cursor = data.get("next_cursor")
            print(f"Page {page + 1} done. Total tweets saved: {total}")

            if not next_cursor:
                break
            time.sleep(0.1)

    print(f"Export complete: {total} tweets saved to {output_file}")


# --- Run ---
search_and_export(
    query='(bitcoin OR ethereum) lang:en min_faves:10 -filter:retweets',
    order="latest",
    max_pages=20,
    output_file="crypto_tweets.csv",
)
This script handles everything: authentication, pagination, flattening nested user objects, and writing clean CSV rows. You can adjust max_pages to control how deep you scrape. At roughly 20 tweets per page, max_pages=50 will give you around 1,000 tweets.
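Once the CSV exists, downstream analysis needs nothing beyond the standard library. This sketch sums likes per author from rows shaped like the export above; the inline sample (fabricated for illustration) stands in for a real crypto_tweets.csv:

```python
import csv
import io
from collections import defaultdict

# Inline sample with columns matching the export script; in practice you would
# pass open("crypto_tweets.csv", encoding="utf-8") instead of a StringIO.
sample_csv = """username,likes,full_text
alice,120,Bitcoin is ripping today
bob,40,Ethereum gas fees are down
alice,80,BTC dominance climbing
"""

def likes_per_author(csv_file):
    """Sum the likes column per username."""
    totals = defaultdict(int)
    for row in csv.DictReader(csv_file):
        totals[row["username"]] += int(row["likes"])
    return dict(totals)

totals = likes_per_author(io.StringIO(sample_csv))
print(totals)  # {'alice': 200, 'bob': 40}
```

The same DictReader pattern extends to any per-author or per-language rollup on the exported columns.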

Tips for Effective Tweet Searching

Start with order: "popular" for research, switch to "latest" for monitoring. The popular sort surfaces high-engagement content - great for trend analysis and finding influencers. The latest sort gives you chronological data - essential for real-time alerts and complete data collection within a time window.

Use engagement filters to cut through noise. Searching for a broad keyword like "AI" will return millions of results, most of them low-effort posts. Adding min_faves:10 or min_retweets:5 dramatically improves signal quality without losing important content.

Combine -filter:replies with -filter:retweets for original content only. When you want to analyze what people are actually saying (not just reacting to), this combo strips away noise and gives you only original tweets.

Add lang:xx directly in the query string. If your analysis or LLM pipeline expects English text, always include lang:en in the query. This is more reliable than filtering post-hoc, because the API handles language detection on its side.

Respect rate limits. Sorsa API allows 20 requests per second per API key. The time.sleep(0.1) in the examples above keeps you well within this limit. For high-volume scraping jobs, see Rate Limits for throttling strategies.

Use date ranges for historical analysis. Combine since: and until: operators to search within a specific window - great for studying reactions to events, product launches, or campaigns.

Sorsa Search vs. Other Twitter/X Data APIs

If you have worked with the official X API v2, you know the pain: OAuth 2.0 setup, developer portal approval, limited search history (7 days on Basic, 30 on Pro), and pricing that starts at $100/month and quickly climbs to thousands. Sorsa API simplifies this dramatically:
  • No OAuth. A single API key in a header. You can be making search requests within 60 seconds of signing up.
  • No approval process. Create an account, get your key, start searching.
  • Full search operator support. Every operator that works in the X search bar works in the Sorsa query field.
  • Consistent response format. Every tweet comes with all engagement metrics and full author profile. No need to specify tweet.fields or expansions.
  • Simple POST-based interface. No URL parameter length limits. Your complex Boolean queries go safely in the JSON body.
For developers building X data scrapers, social media monitoring tools, or research pipelines, this means less boilerplate code, faster development cycles, and more time spent on what matters - analyzing the data.

Next Steps

  • Search Operators Reference - full list of Boolean operators and filters you can use in the query field.
  • Search Mentions Guide - track direct @mentions of any account with the dedicated /mentions endpoint and its rich filter set.
  • Pagination - deep dive into cursor-based pagination for large-scale data collection.
  • Response Format - complete schema reference for Tweet and User objects.
  • API Reference - full specification for all 38 Sorsa API endpoints.