
How to Track Twitter Mentions of Any Account via API

Every day, thousands of conversations happen around your brand on X (formerly Twitter) that you never see. Users tag your handle to report bugs, praise your product, ask questions, or vent frustrations - and most of these mentions disappear into the timeline within minutes. For competitors, the story is the same: their customers are publicly sharing exactly what they love and hate, and that intelligence is sitting there, uncollected. The Sorsa API /mentions endpoint turns this stream of public conversation into structured, filterable data. Point it at any public handle, set engagement thresholds to cut through spam, define a date window, and get back clean JSON with full tweet content and author profiles. This guide covers everything from your first API call to production workflows for brand monitoring, support triage, competitive auditing, and campaign measurement.

Quick Start: Your First Mention Query

Before we dive into parameters and workflows, here is the fastest path to pulling mentions. This single cURL command fetches recent high-engagement mentions of any account:
curl -X POST https://api.sorsa.io/v3/mentions \
  -H "ApiKey: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "AppleSupport",
    "order": "latest",
    "min_likes": 10,
    "since_date": "2026-03-01"
  }'
That is it - a POST request with your API key and a JSON body. The response is an array of tweet objects, each containing the mention text, engagement metrics, and the full profile of the person who posted it. No OAuth, no developer portal, no field selection boilerplate.
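If you prefer Python, the same call looks like this. This is a minimal sketch using the requests library; the build_body helper is just an illustrative convenience, not part of the API.

```python
import requests

API_URL = "https://api.sorsa.io/v3/mentions"

def build_body(handle, **filters):
    """Assemble the JSON body for a /mentions request."""
    body = {"query": handle}
    body.update(filters)
    return body

def fetch_mentions(api_key, handle, **filters):
    """POST to /mentions and return the parsed JSON response."""
    resp = requests.post(
        API_URL,
        headers={"ApiKey": api_key, "Content-Type": "application/json"},
        json=build_body(handle, **filters),
    )
    resp.raise_for_status()
    return resp.json()

# Same query as the cURL command above:
# data = fetch_mentions("YOUR_API_KEY", "AppleSupport",
#                       order="latest", min_likes=10, since_date="2026-03-01")
```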
Tip: You can also test this endpoint without writing code using the API Playground.
Now let’s look at what you can do with the full parameter set.

Endpoint Reference

POST https://api.sorsa.io/v3/mentions
Parameter     Type     Required  Description
query         string   Yes       The handle to track, without the @ symbol. Example: "elonmusk"
order         string   No        "latest" (default, chronological) or "popular" (engagement-ranked).
since_date    string   No        Start date in YYYY-MM-DD format.
until_date    string   No        End date in YYYY-MM-DD format.
min_likes     integer  No        Minimum likes a mention must have.
min_retweets  integer  No        Minimum retweets a mention must have.
min_replies   integer  No        Minimum replies a mention must have.
next_cursor   string   No        Pagination cursor from a previous response.
The response follows the standard Sorsa format:
{
  "tweets": [
    {
      "id": "2031847200012345678",
      "full_text": "@AppleSupport My iPhone keeps restarting after the latest update. Anyone else?",
      "created_at": "Sat Mar 08 14:22:31 +0000 2026",
      "likes_count": 47,
      "retweet_count": 12,
      "reply_count": 8,
      "view_count": 15200,
      "lang": "en",
      "is_reply": false,
      "user": {
        "id": "9876543210",
        "username": "frustrated_user",
        "display_name": "Alex",
        "followers_count": 1240,
        "verified": false
      }
    }
  ],
  "next_cursor": "DAABCgABGSmiaxkA..."
}
Each mention arrives with all engagement metrics and the complete author profile embedded. You can immediately assess both what was said and who said it - without a second API call. When next_cursor is present, more pages are available. When it is null or absent, you have reached the end.
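The created_at field uses the classic Twitter timestamp format rather than ISO 8601. In Python, you can convert it to a timezone-aware datetime with strptime; this sketch assumes the format shown in the sample response above.

```python
from datetime import datetime

# Twitter-style timestamp, e.g. "Sat Mar 08 14:22:31 +0000 2026"
CREATED_AT_FORMAT = "%a %b %d %H:%M:%S %z %Y"

def parse_created_at(value):
    """Convert a created_at string into a timezone-aware datetime."""
    return datetime.strptime(value, CREATED_AT_FORMAT)

ts = parse_created_at("Sat Mar 08 14:22:31 +0000 2026")
print(ts.isoformat())  # 2026-03-08T14:22:31+00:00
```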

When to Use /mentions vs. /search-tweets

Both endpoints can surface tweets that reference a brand or account, but they solve different problems.

Use /mentions when you want everything directed at a specific handle. The endpoint is tuned for this - it catches @-tags, replies, and references that a keyword search might miss. The built-in min_likes, min_retweets, min_replies, since_date, and until_date parameters make it cleaner to build programmatic queries without encoding engagement filters as search operators inside a query string.

Use /search-tweets when you want to track a brand name that people mention without tagging (e.g., users writing “I love Nike” without the @). For that, you would search for "Nike" -from:Nike lang:en using the search endpoint. You should also use /search-tweets when you need complex Boolean logic that goes beyond what /mentions supports - combining multiple keywords, excluding specific terms, filtering by media type, etc. See Search Tweets for the full guide.

In many production setups, you use both: /mentions to catch direct @-tags and /search-tweets to catch “dark mentions” where users talk about your brand without tagging you. Together, they give you complete coverage.
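A sketch of the combined approach, merging results from both endpoints and deduplicating by tweet ID. The /search-tweets request body here is an assumption extrapolated from the /mentions pattern; check the Search Tweets guide for the exact shape.

```python
import requests

API_KEY = "YOUR_API_KEY"
HEADERS = {"ApiKey": API_KEY, "Content-Type": "application/json"}

def merge_unique(*tweet_lists):
    """Merge tweet lists, keeping the first occurrence of each tweet ID."""
    seen, merged = set(), []
    for tweets in tweet_lists:
        for t in tweets:
            if t["id"] not in seen:
                seen.add(t["id"])
                merged.append(t)
    return merged

def full_coverage(brand_handle, brand_name):
    """Direct @-mentions plus 'dark mentions' that skip the tag."""
    tagged = requests.post(
        "https://api.sorsa.io/v3/mentions",
        headers=HEADERS, json={"query": brand_handle, "order": "latest"},
    ).json().get("tweets", [])
    # Assumed body shape for /search-tweets -- see the Search Tweets guide
    untagged = requests.post(
        "https://api.sorsa.io/v3/search-tweets",
        headers=HEADERS,
        json={"query": f'"{brand_name}" -from:{brand_handle} lang:en'},
    ).json().get("tweets", [])
    return merge_unique(tagged, untagged)
```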

Scenario 1: Reputation Dashboard for a Brand

You manage social for a consumer brand that gets hundreds of mentions daily. Most are bot tags, spam, and zero-engagement noise. You need a dashboard showing only the mentions that actually matter - the ones other people are seeing and reacting to.

Python Implementation

import requests
import time

API_KEY = "YOUR_API_KEY"
URL = "https://api.sorsa.io/v3/mentions"

def get_high_impact_mentions(handle, min_likes=50, max_pages=10):
    """Fetch only the mentions with real engagement."""
    all_mentions = []
    next_cursor = None

    for page in range(max_pages):
        body = {
            "query": handle,
            "order": "popular",
            "min_likes": min_likes,
        }
        if next_cursor:
            body["next_cursor"] = next_cursor

        resp = requests.post(
            URL,
            headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
            json=body,
        )
        resp.raise_for_status()
        data = resp.json()

        tweets = data.get("tweets", [])
        all_mentions.extend(tweets)

        next_cursor = data.get("next_cursor")
        if not next_cursor:
            break
        time.sleep(0.1)

    return all_mentions


mentions = get_high_impact_mentions("nike", min_likes=100)
print(f"Found {len(mentions)} high-impact mentions of @nike\n")

for m in mentions[:5]:
    print(f"@{m['user']['username']} ({m['user']['followers_count']} followers)")
    print(f"  {m['full_text'][:100]}...")
    print(f"  Likes: {m['likes_count']} | Views: {m.get('view_count', 'N/A')}\n")
Setting min_likes: 100 with order: "popular" gives you a clean feed of mentions that have actual audience reach. For a smaller brand, drop the threshold to 5 or 10.

JavaScript Implementation

const API_KEY = "YOUR_API_KEY";
const URL = "https://api.sorsa.io/v3/mentions";

async function getHighImpactMentions(handle, minLikes = 50, maxPages = 10) {
  const allMentions = [];
  let nextCursor = null;

  for (let page = 0; page < maxPages; page++) {
    const body = { query: handle, order: "popular", min_likes: minLikes };
    if (nextCursor) body.next_cursor = nextCursor;

    const resp = await fetch(URL, {
      method: "POST",
      headers: { "ApiKey": API_KEY, "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    if (!resp.ok) throw new Error(`HTTP ${resp.status}`);

    const data = await resp.json();
    allMentions.push(...(data.tweets || []));

    nextCursor = data.next_cursor;
    if (!nextCursor) break;
    await new Promise((r) => setTimeout(r, 100));
  }
  return allMentions;
}

// Usage
const mentions = await getHighImpactMentions("nike", 100);
console.log(`Found ${mentions.length} high-impact mentions`);

Scenario 2: Customer Support Queue

Support accounts need the opposite strategy: catch every mention, including ones with zero engagement, because each one could be a customer waiting for help. The key parameters here are order: "latest" (chronological) and no engagement filters.
def get_support_queue(handle, since_date=None):
    """Pull all recent mentions for a support team to process."""
    body = {"query": handle, "order": "latest"}
    if since_date:
        body["since_date"] = since_date

    resp = requests.post(
        URL,
        headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
        json=body,
    )
    resp.raise_for_status()
    return resp.json().get("tweets", [])


mentions = get_support_queue("YourBrandSupport", since_date="2026-03-10")

# Simple intent categorization
support_keywords = {"help", "issue", "broken", "bug", "error", "fix", "problem", "crash"}
positive_keywords = {"love", "amazing", "great", "thanks", "awesome", "perfect"}

for m in mentions:
    words = set(m["full_text"].lower().split())
    if words & support_keywords:
        tag = "SUPPORT"
    elif words & positive_keywords:
        tag = "POSITIVE"
    else:
        tag = "OTHER"
    print(f"[{tag}] @{m['user']['username']}: {m['full_text'][:100]}")
In a production system, you would poll this every 15-60 seconds and route new mentions into your ticketing system (Zendesk, Intercom, Linear) or a Slack channel. See the Real-Time Monitoring guide for the complete polling pattern with deduplication.

Scenario 3: Measuring Campaign Impact

After a product launch or marketing push, you need to answer: how many people talked about us, how much engagement did those mentions generate, and who were the loudest voices?
def measure_campaign(handle, start, end, max_pages=50):
    """Collect all mentions in a campaign window and compute aggregate stats."""
    all_mentions = []
    next_cursor = None

    for page in range(max_pages):
        body = {
            "query": handle,
            "order": "latest",
            "since_date": start,
            "until_date": end,
        }
        if next_cursor:
            body["next_cursor"] = next_cursor

        resp = requests.post(
            URL,
            headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
            json=body,
        )
        resp.raise_for_status()
        data = resp.json()

        all_mentions.extend(data.get("tweets", []))
        next_cursor = data.get("next_cursor")
        if not next_cursor:
            break
        time.sleep(0.1)

    # Aggregate
    total_likes = sum(m.get("likes_count", 0) for m in all_mentions)
    total_rts = sum(m.get("retweet_count", 0) for m in all_mentions)
    total_views = sum(m.get("view_count") or 0 for m in all_mentions)  # view_count may be null
    unique_authors = len({m["user"]["id"] for m in all_mentions})

    print(f"Campaign Report: @{handle} ({start} to {end})")
    print(f"  Total mentions:  {len(all_mentions)}")
    print(f"  Unique authors:  {unique_authors}")
    print(f"  Combined likes:  {total_likes:,}")
    print(f"  Combined RTs:    {total_rts:,}")
    print(f"  Combined views:  {total_views:,}")

    # Top 3 most-liked mentions
    top = sorted(all_mentions, key=lambda m: m.get("likes_count", 0), reverse=True)[:3]
    print(f"\n  Top mentions:")
    for m in top:
        print(f"    @{m['user']['username']} ({m['likes_count']} likes): {m['full_text'][:80]}...")

    return all_mentions


data = measure_campaign("yourbrand", "2026-02-01", "2026-02-14")
This produces a concise campaign report: total volume, unique voices, aggregate engagement, and the three most-liked mentions. Export the full dataset to CSV (see below) for deeper analysis in a spreadsheet or BI tool.

Scenario 4: Competitive Benchmarking

The query parameter accepts any public handle, not just your own. Run identical analysis on multiple competitors to compare public attention and sentiment.
competitors = ["competitor1", "competitor2", "competitor3"]

for handle in competitors:
    mentions = get_high_impact_mentions(handle, min_likes=20, max_pages=5)
    if not mentions:
        print(f"@{handle}: no high-impact mentions found\n")
        continue

    avg_likes = sum(m["likes_count"] for m in mentions) / len(mentions)
    avg_followers = sum(m["user"]["followers_count"] for m in mentions) / len(mentions)

    print(f"@{handle}: {len(mentions)} mentions | avg likes: {avg_likes:.0f} | avg author followers: {avg_followers:.0f}")
    for m in mentions[:2]:
        print(f"  \"{m['full_text'][:70]}...\"")
    print()
This gives you a quick competitive snapshot: who is getting the most visible public attention, what the average engagement looks like, and what people are actually saying. Do this weekly and you have a lightweight competitive intelligence pipeline. For a more comprehensive approach, see Competitor Analysis.
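Once you have mention lists per handle, a simple share-of-voice calculation turns them into a single comparable metric. This is a sketch; weighting by likes as an engagement proxy is an illustrative choice, not something the API prescribes.

```python
def share_of_voice(mentions_by_handle):
    """Percentage of total mention volume and total likes per handle."""
    volume = {h: len(ms) for h, ms in mentions_by_handle.items()}
    likes = {h: sum(m.get("likes_count", 0) for m in ms)
             for h, ms in mentions_by_handle.items()}
    total_vol = sum(volume.values()) or 1
    total_likes = sum(likes.values()) or 1
    return {
        h: {
            "volume_share": round(100 * volume[h] / total_vol, 1),
            "engagement_share": round(100 * likes[h] / total_likes, 1),
        }
        for h in mentions_by_handle
    }

# Toy example
report = share_of_voice({
    "competitor1": [{"likes_count": 30}, {"likes_count": 10}],
    "competitor2": [{"likes_count": 60}],
})
print(report)
```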

Exporting Mentions to CSV

For serious analysis - pivot tables, sentiment classification, time series charts - you need the data in a flat file. Here is a complete export script:
import requests
import time
import csv

API_KEY = "YOUR_API_KEY"
URL = "https://api.sorsa.io/v3/mentions"

def export_mentions(handle, output="mentions.csv", since=None, until=None,
                    min_likes=0, max_pages=50):
    fields = [
        "tweet_id", "created_at", "full_text", "lang",
        "likes", "retweets", "replies", "quotes", "views",
        "author_id", "username", "display_name", "followers", "verified",
    ]

    with open(output, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        next_cursor = None
        total = 0

        for page in range(max_pages):
            body = {"query": handle, "order": "latest"}
            if since: body["since_date"] = since
            if until: body["until_date"] = until
            if min_likes > 0: body["min_likes"] = min_likes
            if next_cursor: body["next_cursor"] = next_cursor

            resp = requests.post(URL, headers={"ApiKey": API_KEY, "Content-Type": "application/json"}, json=body)
            resp.raise_for_status()
            data = resp.json()

            for t in data.get("tweets", []):
                u = t.get("user", {})
                writer.writerow({
                    "tweet_id": t["id"], "created_at": t["created_at"],
                    "full_text": t["full_text"], "lang": t.get("lang", ""),
                    "likes": t.get("likes_count", 0), "retweets": t.get("retweet_count", 0),
                    "replies": t.get("reply_count", 0), "quotes": t.get("quote_count", 0),
                    "views": t.get("view_count", 0), "author_id": u.get("id", ""),
                    "username": u.get("username", ""), "display_name": u.get("display_name", ""),
                    "followers": u.get("followers_count", 0), "verified": u.get("verified", False),
                })
                total += 1

            next_cursor = data.get("next_cursor")
            if not next_cursor: break
            time.sleep(0.1)

    print(f"Exported {total} mentions to {output}")


export_mentions("nike", output="nike_mentions_feb.csv", since="2026-02-01", until="2026-03-01", min_likes=5)
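Once exported, the CSV is easy to analyze with the standard library alone. For example, counting mentions per calendar day for a time-series chart - a sketch run here on two synthetic rows, with created_at parsing assuming the Twitter timestamp format shown earlier.

```python
import csv
import io
from collections import Counter
from datetime import datetime

def mentions_per_day(csv_file):
    """Count exported mentions per calendar day."""
    counts = Counter()
    for row in csv.DictReader(csv_file):
        ts = datetime.strptime(row["created_at"], "%a %b %d %H:%M:%S %z %Y")
        counts[ts.date().isoformat()] += 1
    return counts

# Toy example with two synthetic rows (pass an open file in real use)
sample = io.StringIO(
    "tweet_id,created_at,full_text\n"
    '1,Sat Mar 08 14:22:31 +0000 2026,"@nike great shoes"\n'
    '2,Sat Mar 08 18:05:10 +0000 2026,"@nike order delayed"\n'
)
print(mentions_per_day(sample))  # Counter({'2026-03-08': 2})
```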

Real-Time Mention Alerts via Slack

Combine the /mentions endpoint with a simple polling loop to get instant Slack notifications when someone mentions your brand. This pattern is covered in depth in the Real-Time Monitoring guide, but here is the mentions-specific version:
import requests
import time

API_KEY = "YOUR_API_KEY"
SLACK_WEBHOOK = "https://hooks.slack.com/services/YOUR/SLACK/URL"
HANDLE = "yourbrand"
INTERVAL = 15  # seconds

last_seen_id = None
print(f"Watching @{HANDLE} for new mentions...")

while True:
    try:
        resp = requests.post(
            "https://api.sorsa.io/v3/mentions",
            headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
            json={"query": HANDLE, "order": "latest"},
        )
        resp.raise_for_status()
        tweets = resp.json().get("tweets", [])

        if tweets and last_seen_id is None:
            last_seen_id = tweets[0]["id"]
        elif tweets:
            # Compare IDs numerically; they arrive as strings in the response
            new = [t for t in tweets if int(t["id"]) > int(last_seen_id)]
            for m in reversed(new):
                u = m["user"]
                requests.post(SLACK_WEBHOOK, json={"text": (
                    f"*New @{HANDLE} mention*\n"
                    f"@{u['username']} ({u['followers_count']} followers):\n"
                    f"{m['full_text']}\n"
                    f"<https://x.com/{u['username']}/status/{m['id']}|View on X>"
                )})
            if new:
                last_seen_id = new[0]["id"]

    except Exception as e:
        print(f"Error: {e}")
        time.sleep(INTERVAL * 2)
        continue
    time.sleep(INTERVAL)
For production, persist last_seen_id to a file or database so the monitor survives restarts. Add exponential backoff for HTTP 429 errors. Route different mention types (support requests vs. praise vs. press) to different Slack channels using keyword matching.
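A minimal sketch of both hardening steps - checkpoint persistence and exponential backoff for 429s. The file path, base interval, and cap are illustrative choices, not API requirements.

```python
import json
import os

CHECKPOINT_FILE = "mentions_checkpoint.json"  # illustrative path

def save_checkpoint(last_seen_id, path=CHECKPOINT_FILE):
    """Persist the newest processed tweet ID so restarts resume cleanly."""
    with open(path, "w") as f:
        json.dump({"last_seen_id": last_seen_id}, f)

def load_checkpoint(path=CHECKPOINT_FILE):
    """Return the saved tweet ID, or None on first run."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f).get("last_seen_id")

def backoff_delay(attempt, base=15, cap=300):
    """Exponential backoff for HTTP 429s: 15s, 30s, 60s, ... capped at 300s."""
    return min(base * (2 ** attempt), cap)
```

In the polling loop above, call load_checkpoint() at startup instead of initializing last_seen_id to None, call save_checkpoint() after each batch of new mentions, and sleep for backoff_delay(attempt) whenever the API returns 429.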

Common Pitfalls and How to Avoid Them

  • Setting min_likes too high and missing important mentions. A customer reporting a critical bug with 2 likes is more important than a meme with 500. For support use cases, always set min_likes to 0. Reserve high thresholds for reputation dashboards and trend analysis.
  • Forgetting to paginate. A single request returns roughly 20 mentions. If your brand gets 200 mentions a day, a single page captures only 10% of the conversation. Always loop through next_cursor until it is empty when doing analysis or export. See Pagination for patterns and code examples.
  • Not distinguishing between /mentions and /search-tweets results. People mention brands in two ways: by tagging (@nike) and by name without tagging (“love my new Nikes”). The /mentions endpoint only captures the first type. For full coverage, run a parallel /search-tweets query with the brand name as a keyword.
  • Polling too aggressively for real-time alerts. Polling every second for a brand that gets 10 mentions per day wastes requests. Match your polling interval to your mention volume: every 15 seconds for high-traffic brands, every minute or two for smaller accounts. Keep in mind the rate limit of 20 requests per second.
  • Not persisting last_seen_id across script restarts. If your monitoring script crashes and restarts without remembering its checkpoint, it either reprocesses old mentions (duplicate alerts) or skips the gap (missed mentions). Store the last ID in a file, Redis, or database.

Next Steps

  • Search Tweets - for keyword-based monitoring that catches mentions without @-tags.
  • Search Operators - combine operators with /search-tweets for complex mention queries involving media filters, Boolean logic, and geo.
  • Real-Time Monitoring - the complete polling architecture with deduplication, backoff, and multi-source monitoring.
  • Historical Data - pull mention data from months or years ago for longitudinal studies.
  • API Reference - full specification for /mentions and all Sorsa API endpoints.