How to Get Twitter Followers and Following Lists via API

The follower and following lists of any public X (formerly Twitter) account are some of the most valuable datasets available on the platform. A follower list tells you exactly who is interested in a brand, topic, or personality. A following list reveals who that account pays attention to - their influences, competitors, and information sources. Together, they map out the social graph around any account.

Sorsa API provides two endpoints for extracting this data: /followers (who follows an account) and /follows (who an account is following). Both return up to 200 full user profiles per request, with cursor-based pagination to walk through the complete list. Every user object includes bio, follower counts, tweet counts, location, verified status, profile image, and more - giving you rich audience data, not just a list of handles.

Why do you need an API for this? If you have ever tried to scroll through a large follower list on x.com, you know the UI cuts you off after a few hundred results. The web interface is designed for casual browsing, not data collection. For any serious audience research - lead generation, competitive analysis, influencer discovery, academic study - an API is the only reliable way to extract a complete follower or following list with profile metadata attached.

This guide covers both endpoints, from the simplest possible request to production-scale extraction with filtering, audience overlap analysis, and CSV export.

Simplest Example: Get Followers

One request, one response. This is all you need to fetch the first page of followers for any public account:

cURL

curl "https://api.sorsa.io/v3/followers?username=stripe" \
  -H "ApiKey: YOUR_API_KEY"

Python

import requests

resp = requests.get(
    "https://api.sorsa.io/v3/followers",
    headers={"ApiKey": "YOUR_API_KEY"},
    params={"username": "stripe"},
)
for user in resp.json().get("users", []):
    print(f"@{user['username']} - {user.get('description', '')[:80]}")

JavaScript

const resp = await fetch(
  "https://api.sorsa.io/v3/followers?username=stripe",
  { headers: { "ApiKey": "YOUR_API_KEY" } }
);
const { users } = await resp.json();
users.forEach((u) => console.log(`@${u.username} - ${u.description?.slice(0, 80)}`));

That is it. A GET request with your API key and a username. The response contains a users array of up to 200 user objects with full profile data.

Simplest Example: Get Following (Subscriptions)

The /follows endpoint works identically but returns the accounts that a user is following:

cURL

curl "https://api.sorsa.io/v3/follows?username=stripe" \
  -H "ApiKey: YOUR_API_KEY"

Python

resp = requests.get(
    "https://api.sorsa.io/v3/follows",
    headers={"ApiKey": "YOUR_API_KEY"},
    params={"username": "stripe"},
)
for user in resp.json().get("users", []):
    print(f"@{user['username']} ({user['followers_count']} followers)")

Endpoint Reference

Both endpoints are GET requests with the same input options:

GET /v3/followers

Returns users who follow the specified account.

GET /v3/follows

Returns accounts that the specified user is following.

Input Parameters (query string)

Parameter   Type      Required             Description
username    string    One of these three   The handle without @. Example: stripe
user_id     string    One of these three   The numeric user ID. Example: 44196397
user_link   string    One of these three   Full profile URL. Example: https://x.com/stripe
cursor      integer   No                   Pagination cursor from a previous response.

You must provide exactly one of username, user_id, or user_link.
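The exactly-one-of rule is easy to enforce client-side before making a request. A minimal sketch - the helper name build_lookup_params is ours, not part of the API:

```python
def build_lookup_params(username=None, user_id=None, user_link=None, cursor=None):
    """Build query params for /followers or /follows, enforcing the
    exactly-one-of rule for the account identifier."""
    identifiers = {"username": username, "user_id": user_id, "user_link": user_link}
    provided = {k: v for k, v in identifiers.items() if v is not None}
    if len(provided) != 1:
        raise ValueError("Provide exactly one of username, user_id, or user_link")
    params = dict(provided)
    if cursor is not None:
        params["cursor"] = cursor
    return params
```

Pass the result as the `params=` argument to requests.get, exactly as in the examples above.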

Response

{
  "users": [
    {
      "id": "1234567890",
      "username": "developer_jane",
      "display_name": "Jane Chen",
      "description": "Full-stack developer. Building things with APIs.",
      "location": "San Francisco, CA",
      "followers_count": 4820,
      "followings_count": 312,
      "tweets_count": 1847,
      "media_count": 89,
      "created_at": "Mon Jan 15 08:22:41 +0000 2018",
      "verified": false,
      "protected": false,
      "profile_image_url": "https://pbs.twimg.com/profile_images/...",
      "bio_urls": ["https://janechen.dev"],
      "pinned_tweet_ids": ["1987654321098765432"]
    }
  ],
  "next_cursor": 1234567890
}

Each user object includes: id, username, display_name, description, location, created_at, followers_count, followings_count, favourites_count, tweets_count, media_count, profile_image_url, profile_background_image_url, bio_urls, pinned_tweet_ids, verified, can_dm, protected, and possibly_sensitive.

Each page returns up to 200 user objects - among the highest per-request yields available from any X data provider.

When next_cursor is present in the response, more results are available. Pass it as the cursor parameter in your next request. When it is absent or null, you have reached the end of the list.

Paginating Through a Full Follower List

A single request returns one page. To collect the complete follower list, loop through pages using next_cursor until it is absent.

Python

import requests
import time

API_KEY = "YOUR_API_KEY"

def get_all_followers(username, max_pages=50):
    """Fetch the complete follower list of a public account."""
    all_users = []
    cursor = None

    for page in range(max_pages):
        params = {"username": username}
        if cursor:
            params["cursor"] = cursor

        resp = requests.get(
            "https://api.sorsa.io/v3/followers",
            headers={"ApiKey": API_KEY},
            params=params,
        )
        resp.raise_for_status()
        data = resp.json()

        users = data.get("users", [])
        all_users.extend(users)
        print(f"Page {page + 1}: {len(users)} followers (total: {len(all_users)})")

        cursor = data.get("next_cursor")
        if not cursor:
            print("Complete.")
            break
        time.sleep(0.1)

    return all_users


followers = get_all_followers("stripe", max_pages=100)
print(f"\nTotal followers collected: {len(followers)}")

JavaScript

const API_KEY = "YOUR_API_KEY";

async function getAllFollowers(username, maxPages = 50) {
  const allUsers = [];
  let cursor = null;

  for (let page = 0; page < maxPages; page++) {
    const params = new URLSearchParams({ username });
    if (cursor) params.set("cursor", cursor);

    const resp = await fetch(
      `https://api.sorsa.io/v3/followers?${params}`,
      { headers: { "ApiKey": API_KEY } }
    );
    if (!resp.ok) throw new Error(`HTTP ${resp.status}`);

    const data = await resp.json();
    allUsers.push(...(data.users || []));

    console.log(`Page ${page + 1}: ${data.users?.length || 0} followers (total: ${allUsers.length})`);

    cursor = data.next_cursor;
    if (!cursor) break;
    await new Promise((r) => setTimeout(r, 100));
  }
  return allUsers;
}

const followers = await getAllFollowers("stripe");

The same pagination pattern works for /follows - just change the URL.

Getting a Full Following List

The code is identical with the endpoint swapped. Looking at who an account follows is often more revealing than looking at their followers - a founder’s following list tells you which investors, partners, and competitors they pay attention to; an influencer’s following list reveals their information sources.

def get_all_following(username, max_pages=50):
    """Fetch the complete list of accounts a user follows."""
    all_users = []
    cursor = None

    for page in range(max_pages):
        params = {"username": username}
        if cursor:
            params["cursor"] = cursor

        resp = requests.get(
            "https://api.sorsa.io/v3/follows",
            headers={"ApiKey": API_KEY},
            params=params,
        )
        resp.raise_for_status()
        data = resp.json()

        users = data.get("users", [])
        all_users.extend(users)

        cursor = data.get("next_cursor")
        if not cursor:
            break
        time.sleep(0.1)

    return all_users


following = get_all_following("naval", max_pages=20)
print(f"@naval follows {len(following)} accounts")

# Sort by follower count to see the biggest names
following.sort(key=lambda u: u.get("followers_count", 0), reverse=True)
for u in following[:10]:
    print(f"  @{u['username']} ({u['followers_count']:,} followers)")

Practical Applications

Filtering Followers by Profile Criteria

The raw list is useful, but filtering it makes it actionable. Since every user object includes full profile metadata, you can segment the audience by any attribute:

followers = get_all_followers("competitor_handle", max_pages=20)

# High-value accounts: 1K+ followers, active (100+ tweets), not protected
qualified = [
    u for u in followers
    if u.get("followers_count", 0) >= 1000
    and u.get("tweets_count", 0) >= 100
    and not u.get("protected", False)
]
print(f"Qualified leads: {len(qualified)} out of {len(followers)} total")

# Accounts with websites in their bio (potential business leads)
with_websites = [u for u in followers if u.get("bio_urls")]
print(f"Accounts with website links: {len(with_websites)}")

# Filter by location keyword
in_usa = [
    u for u in followers
    if "usa" in (u.get("location") or "").lower()
    or "united states" in (u.get("location") or "").lower()
    or ", us" in (u.get("location") or "").lower()
]
print(f"US-based followers: {len(in_usa)}")

Finding Audience Overlap Between Competitors

Pull follower lists from multiple competitors and find users who follow two or more of them. These are the most engaged people in your market - they have actively opted in to the topic multiple times:

from collections import Counter

competitors = ["competitor1", "competitor2", "competitor3"]
all_ids = []

for handle in competitors:
    followers = get_all_followers(handle, max_pages=10)
    ids = [u["id"] for u in followers]
    all_ids.extend(ids)
    print(f"@{handle}: {len(followers)} followers collected")

# Count how many competitor lists each user appears in
counts = Counter(all_ids)
overlap = {uid: count for uid, count in counts.items() if count >= 2}
print(f"\nUsers following 2+ competitors: {len(overlap)}")

Discovering What Industry Leaders Follow

Scrape the following list of an expert or thought leader to find who they consider worth paying attention to. This surfaces niche accounts, emerging voices, and tools that leaders rely on - intelligence that is hard to get any other way:

following = get_all_following("pmarca", max_pages=10)

print(f"@pmarca follows {len(following)} accounts. Top by follower count:")
following.sort(key=lambda u: u.get("followers_count", 0), reverse=True)
for u in following[:15]:
    print(f"  @{u['username']} ({u['followers_count']:,} followers)")
    print(f"    {u.get('description', '')[:70]}\n")

Exporting to CSV

Write any follower or following list to a CSV file for analysis in Excel, Google Sheets, Pandas, or a CRM import:

import csv

def export_users_to_csv(users, output_file="users.csv"):
    """Export a list of user objects to CSV."""
    fields = [
        "id", "username", "display_name", "description",
        "location", "followers_count", "followings_count",
        "tweets_count", "verified", "created_at",
    ]

    with open(output_file, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for u in users:
            writer.writerow({
                "id": u.get("id", ""),
                "username": u.get("username", ""),
                "display_name": u.get("display_name", ""),
                "description": (u.get("description") or "").replace("\n", " "),
                "location": u.get("location", ""),
                "followers_count": u.get("followers_count", 0),
                "followings_count": u.get("followings_count", 0),
                "tweets_count": u.get("tweets_count", 0),
                "verified": u.get("verified", False),
                "created_at": u.get("created_at", ""),
            })

    print(f"Exported {len(users)} users to {output_file}")


followers = get_all_followers("stripe", max_pages=50)
export_users_to_csv(followers, "stripe_followers.csv")

following = get_all_following("stripe", max_pages=50)
export_users_to_csv(following, "stripe_following.csv")

Estimating API Usage at Scale

Each page returns up to 200 user objects. For planning:

Account Size           Pages Needed   Requests
1,000 followers        5              5
10,000 followers       50             50
100,000 followers      500            500
1,000,000 followers    5,000          5,000

At Sorsa’s rate limit of 20 requests per second, extracting 10,000 followers takes about 3 seconds; 100,000 followers takes about 25 seconds. For accounts with millions of followers, consider sampling (e.g., first 50 pages = ~10,000 followers) rather than a full extraction, unless you specifically need complete coverage.
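The arithmetic behind the table is simple enough to script. A sketch using the figures from this page (200 users per page, 20 requests per second); the helper name is ours:

```python
import math

def estimate_extraction(follower_count, per_page=200, rate_limit_rps=20):
    """Estimate pages (one request each) and wall-clock seconds for a full pull."""
    pages = math.ceil(follower_count / per_page)
    return {"pages": pages, "seconds": pages / rate_limit_rps}

print(estimate_extraction(100_000))  # → {'pages': 500, 'seconds': 25.0}
```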

Data Freshness and Edge Cases

Follower ordering. The /followers endpoint returns followers in the order X provides them, which is generally reverse-chronological (newest followers first). This means the first pages contain the most recently acquired followers.

Protected accounts. If the target account is protected (private), the follower and following lists are not accessible. The endpoint will return an error.

Follower count vs. extracted list. The followers_count on a profile is a real-time counter maintained by X. The extractable list may differ slightly due to suspended, deactivated, or recently removed accounts. This is a platform-level behavior, not Sorsa-specific.

Profile data is current. Each user object reflects the profile as it exists at the time of the request (current bio, current follower count, current username), not the state it was in when the follow relationship was established.
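In practice, pagination code should treat an error on a protected account as "list not accessible" rather than retrying forever. A defensive sketch: the exact status code for protected accounts is not specified here, so this treats any 4xx response as inaccessible, and the session parameter exists only to make the function easy to stub in tests:

```python
def fetch_followers_page(username, api_key, cursor=None, session=None):
    """Fetch one page of followers; return None if the list is inaccessible
    (e.g. a protected account), raise on server-side (5xx) errors."""
    if session is None:
        import requests  # the examples on this page use requests
        session = requests.Session()
    params = {"username": username}
    if cursor:
        params["cursor"] = cursor
    resp = session.get(
        "https://api.sorsa.io/v3/followers",
        headers={"ApiKey": api_key},
        params=params,
    )
    if 400 <= resp.status_code < 500:
        return None  # protected / not found / bad input: nothing to extract
    resp.raise_for_status()
    return resp.json()
```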

Next Steps

  • Finding Your Target Audience - combine follower extraction with bio search, community scraping, and content mining for comprehensive audience discovery.
  • Competitor Analysis - use follower and following data as part of a competitive intelligence pipeline.
  • Verified Followers - the /verified-followers endpoint returns only verified accounts following a given handle.
  • Pagination - general pagination patterns and best practices for large-scale extraction.
  • API Reference - full specification for /followers, /follows, /verified-followers, and all 38 endpoints.