
How to Analyze Competitors on Twitter Using the API

Understanding what your competitors are doing on X (formerly Twitter) - how fast they are growing, what content drives their engagement, who follows them, and what the public says about them - gives you a strategic advantage that gut instinct alone cannot match. With Sorsa API, you can build a structured competitor intelligence pipeline that goes far beyond manually checking a rival’s profile once a month. This guide walks through a complete competitor analysis workflow: benchmarking profiles, dissecting content strategies, mapping audience composition, and tracking public sentiment. Each phase uses a different Sorsa endpoint, and together they form a repeatable system you can run weekly or monthly to stay ahead.
No-code option: If you want a quick side-by-side comparison without writing any code, use the free Sorsa Profile Comparison Tool. Enter two handles and instantly see followers, engagement rate, average likes/retweets per tweet, posting frequency, and account age compared visually. It is the fastest way to get a competitive snapshot.

Phase 1: Profile Benchmarking

Endpoints: GET /v3/info and GET /v3/info-batch

Before analyzing content or sentiment, establish the baseline numbers: follower count, tweet volume, account age, bio positioning, and verified status. The /info endpoint returns a complete profile snapshot for a single account; /info-batch does the same for multiple accounts in one call.

Comparing Multiple Competitors at Once

import requests

API_KEY = "YOUR_API_KEY"

def get_profiles(usernames):
    """Fetch profiles for multiple accounts in a single request."""
    resp = requests.get(
        "https://api.sorsa.io/v3/info-batch",
        headers={"ApiKey": API_KEY},
        params={"usernames": usernames},
    )
    resp.raise_for_status()
    return resp.json().get("users", [])


competitors = ["stripe", "square", "wise"]
profiles = get_profiles(competitors)

# Print a comparison table
print(f"{'Handle':<16} {'Followers':>12} {'Tweets':>10} {'Following':>10} {'Verified':>8}")
print("-" * 60)
for p in profiles:
    print(f"@{p['username']:<15} {p['followers_count']:>12,} {p['tweets_count']:>10,} "
          f"{p['followings_count']:>10,} {str(p.get('verified', False)):>8}")

Tracking Growth Over Time

A single snapshot tells you where competitors stand today. To understand momentum, you need to compare snapshots over time. Run the script above on a daily or weekly schedule (via cron, GitHub Actions, or any task scheduler), store the results in a database or CSV, and compute deltas:
import csv
import os
from datetime import date

def log_snapshot(profiles, output_file="competitor_snapshots.csv"):
    """Append today's snapshot to a running CSV log."""
    today = date.today().isoformat()
    file_exists = os.path.exists(output_file)

    with open(output_file, "a", newline="") as f:
        writer = csv.writer(f)
        if not file_exists:
            writer.writerow(["date", "username", "followers", "tweets", "following"])
        for p in profiles:
            writer.writerow([
                today, p["username"], p["followers_count"],
                p["tweets_count"], p["followings_count"],
            ])

log_snapshot(profiles)
After two weeks of daily snapshots, you can calculate growth rates:
Growth Rate % = ((Followers Today - Followers 14 Days Ago) / Followers 14 Days Ago) * 100
A competitor growing at 2% per week while you grow at 0.5% is a signal worth investigating - what are they doing differently?
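Once the CSV from log_snapshot has accumulated a few entries, the growth-rate formula above can be computed directly from it. Here is a minimal sketch that compares each handle's earliest and latest snapshot (it assumes the exact CSV schema written by log_snapshot):

```python
import csv
from collections import defaultdict

def compute_growth(snapshot_file="competitor_snapshots.csv"):
    """Follower growth % between the earliest and latest snapshot per handle."""
    history = defaultdict(list)
    with open(snapshot_file, newline="") as f:
        for row in csv.DictReader(f):
            history[row["username"]].append((row["date"], int(row["followers"])))

    growth = {}
    for username, points in history.items():
        points.sort()  # ISO dates sort chronologically as plain strings
        (_, first), (_, last) = points[0], points[-1]
        if first > 0:
            growth[username] = (last - first) / first * 100
    return growth

# Usage, after at least two snapshots have been logged:
# for handle, pct in compute_growth().items():
#     print(f"@{handle}: {pct:+.1f}% over the logged period")
```

Swap the first/last comparison for a 7- or 14-day window once you have enough history.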

Phase 2: Content Strategy Analysis

Endpoint: POST /v3/user-tweets

Numbers tell you that a competitor is growing; their content tells you why. Scrape a competitor's recent tweets and analyze which formats, topics, and posting patterns drive their engagement.

Fetching Recent Tweets

import requests
import time

API_KEY = "YOUR_API_KEY"

def get_competitor_tweets(username, max_pages=10):
    """Fetch recent tweets from a competitor's timeline."""
    all_tweets = []
    next_cursor = None

    for page in range(max_pages):
        body = {"link": f"https://x.com/{username}"}
        if next_cursor:
            body["next_cursor"] = next_cursor

        resp = requests.post(
            "https://api.sorsa.io/v3/user-tweets",
            headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
            json=body,
        )
        resp.raise_for_status()
        data = resp.json()

        tweets = data.get("tweets", [])
        all_tweets.extend(tweets)

        next_cursor = data.get("next_cursor")
        if not next_cursor:
            break
        time.sleep(0.1)

    return all_tweets

Extracting Content Insights

Once you have the tweets, compute the metrics that reveal content strategy patterns:
def analyze_content_strategy(tweets, username):
    """Compute content strategy metrics from a list of tweets."""
    if not tweets:
        print(f"No tweets found for @{username}")
        return

    # Engagement stats
    likes = [t.get("likes_count", 0) for t in tweets]
    rts = [t.get("retweet_count", 0) for t in tweets]
    replies_count = [t.get("reply_count", 0) for t in tweets]

    avg_likes = sum(likes) / len(likes)
    avg_rts = sum(rts) / len(rts)

    # Content mix
    total = len(tweets)
    original = sum(1 for t in tweets if not t.get("is_reply") and not t.get("retweeted_status"))
    replies = sum(1 for t in tweets if t.get("is_reply"))
    quotes = sum(1 for t in tweets if t.get("is_quote_status"))
    # "entities" can include hashtags and mentions too; check for media specifically
    has_media = sum(1 for t in tweets if t.get("entities", {}).get("media"))

    # Top performing content
    top_by_likes = sorted(tweets, key=lambda t: t.get("likes_count", 0), reverse=True)[:3]

    print(f"\n{'='*50}")
    print(f"Content Analysis: @{username} ({total} tweets)")
    print(f"{'='*50}")
    print(f"  Avg likes/tweet:    {avg_likes:.1f}")
    print(f"  Avg retweets/tweet: {avg_rts:.1f}")
    print(f"  Content mix:")
    print(f"    Original posts:   {original} ({original/total*100:.0f}%)")
    print(f"    Replies:          {replies} ({replies/total*100:.0f}%)")
    print(f"    Quote tweets:     {quotes} ({quotes/total*100:.0f}%)")
    print(f"    With media:       {has_media} ({has_media/total*100:.0f}%)")
    print(f"\n  Top 3 posts by likes:")
    for i, t in enumerate(top_by_likes, 1):
        print(f"    {i}. ({t['likes_count']} likes) {t['full_text'][:80]}...")


# Run for each competitor
for handle in ["stripe", "square", "wise"]:
    tweets = get_competitor_tweets(handle, max_pages=5)
    analyze_content_strategy(tweets, handle)
This tells you: Does the competitor post mostly original content or engage heavily in replies? Do they use media? What topics drive their highest engagement? A competitor whose top posts are all product announcements tells a different story than one whose top posts are memes or industry commentary. For historical content analysis (e.g., comparing Q1 vs Q2 strategy), add since: and until: operators to your search query. See Historical Data Access for the full approach.
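For the period-scoped comparison mentioned above, the query string just needs the date operators appended. A minimal sketch (it assumes the search-style endpoints accept standard since:/until: operators in the query string, as the docs referenced above describe):

```python
def build_period_query(handle, since, until):
    """Build a search query scoped to a date window using since:/until: operators."""
    return f"{handle} since:{since} until:{until}"

# Compare Q1 vs Q2 for a hypothetical handle
q1 = build_period_query("stripe", "2024-01-01", "2024-03-31")
q2 = build_period_query("stripe", "2024-04-01", "2024-06-30")
```

Pass the resulting string as the "query" field in the request body, then run the same analyze_content_strategy function over each period's tweets.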

Phase 3: Audience Composition

Endpoints: GET /v3/followers, GET /v3/verified-followers, GET /v3/followers-stats

A competitor's follower list reveals who their audience actually is. You can extract followers, filter for high-value accounts, and even find overlap between your audience and theirs.
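The overlap check is a pure set computation once you have two follower lists (fetched with any paginated follower endpoint, in the same style as the verified-followers pager below). A sketch:

```python
def audience_overlap(followers_a, followers_b):
    """Return the shared handles and Jaccard similarity of two follower lists."""
    set_a = {u["username"].lower() for u in followers_a}
    set_b = {u["username"].lower() for u in followers_b}
    shared = set_a & set_b
    union = set_a | set_b
    jaccard = len(shared) / len(union) if union else 0.0
    return shared, jaccard

# Example with inline data; in practice, pass fetched follower lists
mine = [{"username": "alice"}, {"username": "bob"}, {"username": "carol"}]
theirs = [{"username": "Bob"}, {"username": "dave"}]
shared, score = audience_overlap(mine, theirs)
# shared == {"bob"}, score == 0.25
```

A high Jaccard score means you and the competitor are fighting over the same audience; a low one suggests an adjacent segment you have not reached yet.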

Verified/High-Authority Followers

The /verified-followers endpoint returns only verified accounts following a given handle - a quick way to see which notable people, brands, and journalists are in a competitor’s orbit:
def get_verified_followers(username, max_pages=5):
    """Get verified followers of a competitor account."""
    all_verified = []
    cursor = None

    for page in range(max_pages):
        params = {"username": username}
        if cursor:
            params["cursor"] = cursor

        resp = requests.get(
            "https://api.sorsa.io/v3/verified-followers",
            headers={"ApiKey": API_KEY},
            params=params,
        )
        resp.raise_for_status()
        data = resp.json()

        users = data.get("users", [])
        all_verified.extend(users)

        cursor = data.get("next_cursor")
        if not cursor:
            break
        time.sleep(0.1)

    return all_verified


for handle in ["stripe", "square"]:
    verified = get_verified_followers(handle, max_pages=3)
    print(f"\n@{handle}: {len(verified)} verified followers")
    for v in sorted(verified, key=lambda u: u.get("followers_count", 0), reverse=True)[:5]:
        print(f"  @{v['username']} ({v['followers_count']:,} followers)")

Follower Category Breakdown (Crypto/Web3)

For accounts in the Sorsa database (primarily crypto/Web3), the /followers-stats endpoint provides a categorical breakdown of followers: how many are influencers, projects, and VC accounts:
def get_follower_breakdown(username):
    resp = requests.get(
        "https://api.sorsa.io/v3/followers-stats",
        headers={"ApiKey": API_KEY},
        params={"username": username},
    )
    resp.raise_for_status()
    return resp.json()


for handle in ["aaboronin", "VitalikButerin"]:
    stats = get_follower_breakdown(handle)
    print(f"\n@{handle} follower breakdown:")
    print(f"  Total followers: {stats['followers_count']}")
    print(f"  Influencers:     {stats['influencers_count']}")
    print(f"  Projects:        {stats['projects_count']}")
    print(f"  VCs:             {stats['venture_capitals_count']}")

Phase 4: Public Sentiment and Reputation

Endpoint: POST /v3/mentions

What a competitor posts is only half the picture. What the public says about them reveals complaints, praise, feature requests, and PR vulnerabilities you can act on.

Pulling High-Engagement Mentions

def get_competitor_mentions(handle, min_likes=20, max_pages=5):
    """Fetch what the public says about a competitor."""
    all_mentions = []
    next_cursor = None

    for page in range(max_pages):
        body = {
            "query": handle,
            "order": "popular",
            "min_likes": min_likes,
        }
        if next_cursor:
            body["next_cursor"] = next_cursor

        resp = requests.post(
            "https://api.sorsa.io/v3/mentions",
            headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
            json=body,
        )
        resp.raise_for_status()
        data = resp.json()

        all_mentions.extend(data.get("tweets", []))
        next_cursor = data.get("next_cursor")
        if not next_cursor:
            break
        time.sleep(0.1)

    return all_mentions

Simple Sentiment Categorization

A rough keyword-based categorization gives you a fast read on whether mentions skew positive, negative, or neutral:
import re

positive_words = {"love", "great", "amazing", "best", "awesome", "excellent", "recommend", "thank"}
negative_words = {"hate", "terrible", "worst", "awful", "bad", "broken", "scam", "disappointed", "frustrated"}

def categorize_mentions(mentions):
    """Bucket mentions into positive/negative/neutral via keyword matching."""
    pos, neg, neutral = [], [], []
    for m in mentions:
        # Extract word tokens so trailing punctuation ("love!") doesn't block a match
        words = set(re.findall(r"[a-z']+", m["full_text"].lower()))
        if words & negative_words:
            neg.append(m)
        elif words & positive_words:
            pos.append(m)
        else:
            neutral.append(m)
    return pos, neg, neutral


for handle in ["stripe", "square"]:
    mentions = get_competitor_mentions(handle, min_likes=10, max_pages=5)
    pos, neg, neutral = categorize_mentions(mentions)

    print(f"\n@{handle} mention sentiment ({len(mentions)} total):")
    print(f"  Positive: {len(pos)} | Negative: {len(neg)} | Neutral: {len(neutral)}")

    if neg:
        print(f"  Top complaint:")
        top_neg = max(neg, key=lambda m: m.get("likes_count", 0))
        print(f"    @{top_neg['user']['username']}: {top_neg['full_text'][:100]}...")
Negative mentions with high engagement are particularly valuable - they reveal pain points that the competitor has not addressed. If users consistently complain about a specific feature or policy, that is an opportunity for your positioning. For production-grade sentiment analysis, pipe the full_text into an NLP model (OpenAI, HuggingFace, or any classifier) instead of using keyword matching.

Putting It All Together: Competitive Dashboard Script

Here is a single script that runs all four phases for a list of competitors and prints a consolidated report. It assumes the helper functions defined in the earlier phases (get_profiles, get_competitor_tweets, analyze_content_strategy, get_verified_followers, get_competitor_mentions, categorize_mentions) are in the same file or imported:
import requests
import time

API_KEY = "YOUR_API_KEY"
COMPETITORS = ["competitor1", "competitor2", "competitor3"]

def header(text):
    print(f"\n{'='*60}\n{text}\n{'='*60}")

header("PHASE 1: PROFILE BENCHMARKS")
profiles = get_profiles(COMPETITORS)
print(f"{'Handle':<16} {'Followers':>12} {'Tweets':>10} {'Verified':>8}")
for p in profiles:
    print(f"@{p['username']:<15} {p['followers_count']:>12,} {p['tweets_count']:>10,} "
          f"{str(p.get('verified', False)):>8}")

header("PHASE 2: CONTENT STRATEGY")
for handle in COMPETITORS:
    tweets = get_competitor_tweets(handle, max_pages=3)
    analyze_content_strategy(tweets, handle)

header("PHASE 3: VERIFIED FOLLOWERS")
for handle in COMPETITORS:
    verified = get_verified_followers(handle, max_pages=2)
    print(f"@{handle}: {len(verified)} verified followers")

header("PHASE 4: PUBLIC SENTIMENT")
for handle in COMPETITORS:
    mentions = get_competitor_mentions(handle, min_likes=10, max_pages=3)
    pos, neg, neutral = categorize_mentions(mentions)
    print(f"@{handle}: {len(pos)} positive, {len(neg)} negative, {len(neutral)} neutral")
Run this weekly or monthly to track how the competitive landscape shifts. Store the output alongside your own metrics to see where you are gaining or losing ground.

The Free Comparison Tool

For quick, visual competitor benchmarking without writing code, use the Sorsa Profile Comparison Tool. Enter any two X handles and get an instant side-by-side view of:
  • Follower count
  • Engagement rate
  • Average likes and retweets per tweet
  • Posting frequency (tweets per day)
  • Account age
  • Profile bios and locations
This is useful for ad-hoc checks, client presentations, or validating your API-based analysis against a visual reference. The tool is free and requires no API key or login.
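If you want to reproduce the tool's engagement-rate metric in your own pipeline, one common definition (an assumption; different tools compute this differently) is average interactions per tweet divided by follower count:

```python
def engagement_rate(tweets, followers_count):
    """Engagement rate % = (avg likes + retweets + replies per tweet) / followers * 100.
    One common definition; tools vary in what they count as an interaction."""
    if not tweets or not followers_count:
        return 0.0
    interactions = sum(
        t.get("likes_count", 0) + t.get("retweet_count", 0) + t.get("reply_count", 0)
        for t in tweets
    )
    return interactions / len(tweets) / followers_count * 100

sample = [{"likes_count": 10, "retweet_count": 5, "reply_count": 5}]
rate = engagement_rate(sample, followers_count=1000)  # 2.0
```

Feed it the tweets from get_competitor_tweets and the followers_count from /v3/info to benchmark against the tool's numbers.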

Next Steps