Finding the right people on X (formerly Twitter) - potential customers, leads, community members, or partnership prospects - is one of the highest-value applications of social data. Sorsa API gives you multiple discovery paths: search user profiles by keyword, extract follower lists from competitor accounts, scrape community membership rosters, and mine tweet content to identify active voices in any niche.
This guide is a playbook of audience discovery techniques. Each one uses a different Sorsa endpoint (or combination of endpoints) to solve a specific targeting problem, with complete code examples you can adapt to your use case.
Technique 1: Search User Profiles by Keyword
Endpoint: POST /v3/search-users
The most direct way to find a specific persona. This endpoint scans user bios, display names, and handles for your keywords and returns matching profiles with full metadata.
When to use it: You need people who identify as something specific - “Product Manager”, “Solidity Developer”, “Fitness Coach”, “Founder”, “VC” - and say so in their profile.
Request
```json
{
  "query": "Product Manager",
  "next_cursor": ""
}
```
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| query | string | Yes | Keyword to match against bios, display names, and handles. |
| next_cursor | string | No | Pagination cursor from a previous response. |
Python Example
```python
import requests
import time

API_KEY = "YOUR_API_KEY"
URL = "https://api.sorsa.io/v3/search-users"

def find_users_by_bio(query, max_pages=5):
    """Search user profiles matching a keyword."""
    all_users = []
    next_cursor = None
    for page in range(max_pages):
        body = {"query": query}
        if next_cursor:
            body["next_cursor"] = next_cursor
        resp = requests.post(
            URL,
            headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
            json=body,
        )
        resp.raise_for_status()
        data = resp.json()
        users = data.get("users", [])
        all_users.extend(users)
        print(f"Page {page + 1}: {len(users)} users (total: {len(all_users)})")
        next_cursor = data.get("next_cursor")
        if not next_cursor:
            break
        time.sleep(0.1)
    return all_users

# Find AI developers
users = find_users_by_bio("machine learning engineer", max_pages=10)
for u in users[:10]:
    print(f"@{u['username']} | {u['followers_count']} followers")
    print(f"  {u.get('description', '')[:100]}\n")
```
JavaScript Example
```javascript
const API_KEY = "YOUR_API_KEY";

async function findUsersByBio(query, maxPages = 5) {
  const allUsers = [];
  let nextCursor = null;
  for (let page = 0; page < maxPages; page++) {
    const body = { query };
    if (nextCursor) body.next_cursor = nextCursor;
    const resp = await fetch("https://api.sorsa.io/v3/search-users", {
      method: "POST",
      headers: { "ApiKey": API_KEY, "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
    const data = await resp.json();
    allUsers.push(...(data.users || []));
    nextCursor = data.next_cursor;
    if (!nextCursor) break;
    await new Promise((r) => setTimeout(r, 100));
  }
  return allUsers;
}

const users = await findUsersByBio("DeFi builder");
users.slice(0, 5).forEach((u) => {
  console.log(`@${u.username} (${u.followers_count} followers): ${u.description?.slice(0, 80)}`);
});
```
Refining Results
The /search-users response includes full profile data for each match: followers_count, followings_count, tweets_count, verified, description, location, and created_at. Use these fields to post-filter in your code:
```python
# Filter for users with 1K+ followers who are likely professionals, not bots
qualified = [
    u for u in users
    if u.get("followers_count", 0) >= 1000
    and u.get("tweets_count", 0) >= 100
    and not u.get("protected", False)
]
```
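The examples in this guide pause 0.1 s between pages, but large extractions can still hit rate limits. A small retry wrapper with exponential backoff is a common defense. This is a sketch, not part of the Sorsa API: it assumes a rate-limited or failed request surfaces as an exception (for example, `requests.exceptions.HTTPError` raised by `raise_for_status()`); adjust the exception types and delays to the API's actual behavior.

```python
import time

def with_backoff(call, retries=3, base_delay=1.0, retry_on=(Exception,)):
    """Invoke call() and retry on failure with exponential backoff.

    retry_on: exception types that should trigger a retry (assumption:
    rate limiting surfaces as an exception such as HTTPError).
    """
    for attempt in range(retries + 1):
        try:
            return call()
        except retry_on:
            if attempt == retries:
                raise  # out of retries: let the caller see the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Usage: wrap any single-page fetch from the examples in this guide, e.g.
# data = with_backoff(lambda: fetch_page(query, next_cursor))
```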
Technique 2: Extract Competitor Follower Lists
Endpoint: GET /v3/followers
If a competitor, industry leader, or niche account has built an audience in your space, their follower list is your pre-qualified target audience. Every person who followed that account made a deliberate choice to engage with that topic.
When to use it: You have identified accounts whose audience overlaps with your ideal customer profile. You want a list of those followers with profile data for outreach, analysis, or ad targeting.
Request
GET https://api.sorsa.io/v3/followers?username=competitor_handle
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| username | string | One of username, user_id, or user_link | The handle to get followers for. |
| user_id | string | | Alternatively, the numeric user ID. |
| user_link | string | | Alternatively, the full profile URL. |
| cursor | integer | No | Pagination cursor from a previous response. |
Python Example
```python
import requests
import time

API_KEY = "YOUR_API_KEY"

def get_followers(username, max_pages=10):
    """Extract the follower list of any public account."""
    all_followers = []
    cursor = None
    for page in range(max_pages):
        params = {"username": username}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(
            "https://api.sorsa.io/v3/followers",
            headers={"ApiKey": API_KEY},
            params=params,
        )
        resp.raise_for_status()
        data = resp.json()
        users = data.get("users", [])
        all_followers.extend(users)
        print(f"Page {page + 1}: {len(users)} followers (total: {len(all_followers)})")
        cursor = data.get("next_cursor")
        if not cursor:
            break
        time.sleep(0.1)
    return all_followers

# Get followers of a competitor
followers = get_followers("competitor_handle", max_pages=20)
print(f"\nCollected {len(followers)} followers")

# Filter for high-value accounts
high_value = [
    f for f in followers
    if f.get("followers_count", 0) >= 500
    and f.get("tweets_count", 0) >= 50
]
print(f"High-value accounts (500+ followers, 50+ tweets): {len(high_value)}")
```
For a deeper dive into follower extraction with audience overlap analysis and CSV export, see Followers & Following.
Combining Multiple Competitor Lists
For a more comprehensive audience map, pull followers from several competitor accounts and find the overlap - users who follow multiple competitors are the most engaged segment in your market:
```python
from collections import Counter

competitors = ["competitor1", "competitor2", "competitor3"]
follower_sets = {}

for handle in competitors:
    followers = get_followers(handle, max_pages=10)
    follower_sets[handle] = {f["id"] for f in followers}
    print(f"@{handle}: {len(followers)} followers collected")

# Find users who follow 2+ competitors
all_ids = []
for ids in follower_sets.values():
    all_ids.extend(ids)

id_counts = Counter(all_ids)
overlap = {uid for uid, count in id_counts.items() if count >= 2}
print(f"\nUsers following 2+ competitors: {len(overlap)}")
```
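Overlapping IDs are more useful once they are mapped back to full profiles and ranked by how many competitor audiences each user appears in. This self-contained helper (a convenience sketch, not an API feature) takes the raw follower lists returned by get_followers and does both:

```python
from collections import Counter

def rank_overlap(follower_lists, min_overlap=2):
    """Rank users by how many of the given follower lists they appear in.

    follower_lists: iterable of lists of user dicts (each with an "id" key),
    e.g. one list per competitor. Returns (user, count) pairs for users in
    at least min_overlap lists, most-overlapped first.
    """
    counts = Counter()
    profiles = {}
    for users in follower_lists:
        seen = set()
        for u in users:
            uid = u["id"]
            profiles[uid] = u      # keep one profile object per ID
            if uid not in seen:    # count each list at most once per user
                counts[uid] += 1
                seen.add(uid)
    return [(profiles[uid], c) for uid, c in counts.most_common() if c >= min_overlap]
```

For example, `rank_overlap([followers_a, followers_b, followers_c])` returns the profiles that appear in at least two of the three lists, with the users following the most competitors first.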
Technique 3: Discover Audience Through Communities
Endpoint: POST /v3/community-members
X Communities are self-selected groups of users who share an interest. A community called “DeFi Builders” or “Indie Hackers” is, by definition, a list of people who care about that topic. Scraping the membership gives you a pre-filtered audience.
When to use it: You have identified X Communities relevant to your niche and want to extract the member list for outreach, analysis, or ad targeting. For more on working with Communities and Lists, see Lists & Communities.
Request
```json
{
  "community_link": "1966045657589813686",
  "next_cursor": ""
}
```
The community_link accepts either the numeric community ID or the full URL (e.g., https://x.com/i/communities/1966045657589813686).
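Because community_link accepts either form, a small normalizer lets callers pass whatever they have on hand. This helper is a convenience sketch, not part of the API:

```python
def normalize_community_id(link_or_id):
    """Return the numeric community ID from either a raw ID or a full URL."""
    # Full URLs look like https://x.com/i/communities/<id>; take the last
    # path segment and strip any trailing slash or query string.
    tail = link_or_id.rstrip("/").split("/")[-1]
    return tail.split("?")[0]

# normalize_community_id("1966045657589813686")
#   -> "1966045657589813686"
# normalize_community_id("https://x.com/i/communities/1966045657589813686")
#   -> "1966045657589813686"
```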
Python Example
```python
def get_community_members(community_id, max_pages=10):
    """Extract members of an X Community."""
    all_members = []
    next_cursor = None
    for page in range(max_pages):
        body = {"community_link": community_id}
        if next_cursor:
            body["next_cursor"] = next_cursor
        resp = requests.post(
            "https://api.sorsa.io/v3/community-members",
            headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
            json=body,
        )
        resp.raise_for_status()
        data = resp.json()
        users = data.get("users", [])
        all_members.extend(users)
        print(f"Page {page + 1}: {len(users)} members (total: {len(all_members)})")
        next_cursor = data.get("next_cursor")
        if not next_cursor:
            break
        time.sleep(0.1)
    return all_members

members = get_community_members("1966045657589813686", max_pages=20)
print(f"Collected {len(members)} community members")
```
Technique 4: Mine Tweet Content for Active Voices
Endpoint: POST /v3/search-tweets
Sometimes the best way to find your audience is not by who they say they are, but by what they talk about. Search for tweets about a specific pain point, technology, or topic, then extract the author data from each result.
When to use it: You want people who are actively discussing a topic right now - not just people who put something in their bio years ago. This captures real-time intent.
Python Example
```python
def find_active_voices(query, min_followers=100, max_pages=5):
    """
    Search tweets by keyword and extract unique authors.
    Returns a deduplicated list of users actively discussing the topic.
    """
    seen_ids = set()
    active_users = []
    next_cursor = None
    for page in range(max_pages):
        body = {"query": query, "order": "latest"}
        if next_cursor:
            body["next_cursor"] = next_cursor
        resp = requests.post(
            "https://api.sorsa.io/v3/search-tweets",
            headers={"ApiKey": API_KEY, "Content-Type": "application/json"},
            json=body,
        )
        resp.raise_for_status()
        data = resp.json()
        for tweet in data.get("tweets", []):
            user = tweet["user"]
            if user["id"] not in seen_ids and user.get("followers_count", 0) >= min_followers:
                seen_ids.add(user["id"])
                active_users.append({
                    "username": user["username"],
                    "display_name": user.get("display_name", ""),
                    "description": user.get("description", ""),
                    "followers_count": user.get("followers_count", 0),
                    "verified": user.get("verified", False),
                    "sample_tweet": tweet["full_text"][:120],
                })
        next_cursor = data.get("next_cursor")
        if not next_cursor:
            break
        time.sleep(0.1)
    return active_users

# Find people actively asking about CRM tools
voices = find_active_voices(
    '"need a CRM" OR "looking for CRM" OR "CRM recommendation" lang:en -filter:retweets',
    min_followers=100,
    max_pages=10,
)
print(f"Found {len(voices)} unique users discussing CRM tools\n")
for v in voices[:5]:
    print(f"@{v['username']} ({v['followers_count']} followers)")
    print(f"  Bio: {v['description'][:80]}")
    print(f"  Said: \"{v['sample_tweet']}...\"\n")
```
This approach is particularly powerful for lead generation: people tweeting “I need a tool for X” or “can anyone recommend Y” are expressing buying intent in public. For the full range of query operators you can use, see Search Operators.
Query Ideas for Intent-Based Discovery
| Goal | Query |
|---|---|
| People looking for a product | "need a tool for" OR "recommend a" lang:en |
| People complaining about a competitor | "competitor name" (frustrated OR annoyed OR bad OR terrible) -from:competitor |
| People discussing a pain point | "struggling with" OR "how do you handle" your_topic lang:en |
| People celebrating a milestone | "just launched" OR "first customer" OR "hit 1000" your_niche lang:en |
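When you run these intent searches for several niches, keeping the operator syntax in one place avoids copy-paste drift. A small query builder (a hypothetical helper, not an API feature; the templates below follow the table above) can parameterize the patterns:

```python
# Intent templates mirroring the query ideas above; {topic} is filled in.
INTENT_TEMPLATES = {
    "seeking_product": '"need a tool for" OR "recommend a" {topic} lang:en',
    "pain_point": '"struggling with" OR "how do you handle" {topic} lang:en',
    "milestone": '"just launched" OR "first customer" {topic} lang:en',
}

def build_intent_query(intent, topic, exclude_retweets=True):
    """Fill an intent template with a topic and optionally drop retweets."""
    query = INTENT_TEMPLATES[intent].format(topic=topic)
    if exclude_retweets:
        query += " -filter:retweets"
    return query

# build_intent_query("pain_point", "email deliverability")
#   -> '"struggling with" OR "how do you handle" email deliverability lang:en -filter:retweets'
```

The returned string can be passed straight to find_active_voices as the query argument.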
Technique 5: Analyze Verified and High-Authority Followers
Endpoint: GET /v3/verified-followers
For influencer marketing or B2B outreach, you often want to focus on verified or high-authority accounts following a relevant handle. The /verified-followers endpoint returns only verified followers, giving you a shortlist of notable accounts in any audience.
Python Example
```python
def get_verified_followers(username, max_pages=5):
    """Get only verified followers of an account."""
    all_verified = []
    cursor = None
    for page in range(max_pages):
        params = {"username": username}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(
            "https://api.sorsa.io/v3/verified-followers",
            headers={"ApiKey": API_KEY},
            params=params,
        )
        resp.raise_for_status()
        data = resp.json()
        users = data.get("users", [])
        all_verified.extend(users)
        cursor = data.get("next_cursor")
        if not cursor:
            break
        time.sleep(0.1)
    return all_verified

verified = get_verified_followers("openai", max_pages=10)
print(f"Verified followers of @openai: {len(verified)}\n")

# Sort by follower count to find the biggest names
verified.sort(key=lambda u: u.get("followers_count", 0), reverse=True)
for u in verified[:10]:
    print(f"@{u['username']} ({u['followers_count']:,} followers): {u.get('description', '')[:60]}")
```
Choosing the Right Technique
Each method answers a different question about your audience:
| Question | Technique | Endpoint |
|---|---|---|
| Who identifies as my target persona? | Profile keyword search | /search-users |
| Who already follows accounts like mine? | Competitor follower extraction | /followers |
| Who joined a group around my topic? | Community member scraping | /community-members |
| Who is actively talking about my topic? | Tweet content mining | /search-tweets |
| Which notable accounts follow a leader in my space? | Verified follower analysis | /verified-followers |
In practice, you often combine multiple techniques. For example: search for “DevOps engineer” via /search-users to find your broad audience, then cross-reference with followers of a competitor to identify the warmest leads, and finally check /search-tweets for people actively complaining about existing tools to find immediate opportunities.
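As a sketch of that combined workflow: the persona search and the follower extraction reuse the helpers defined earlier in this guide, so the only new piece is the intersection step. The function below is self-contained and only assumes each user dict carries an "id" key, as in the examples above:

```python
def warmest_leads(persona_users, competitor_followers):
    """Intersect persona search results with a competitor's follower list.

    persona_users / competitor_followers: lists of user dicts with an "id"
    key, e.g. output of find_users_by_bio() and get_followers() above.
    Returns persona users who also follow the competitor, largest audience first.
    """
    follower_ids = {f["id"] for f in competitor_followers}
    leads = [u for u in persona_users if u["id"] in follower_ids]
    leads.sort(key=lambda u: u.get("followers_count", 0), reverse=True)
    return leads

# Full pipeline sketch (live API calls, so shown as comments):
# persona = find_users_by_bio("DevOps engineer", max_pages=10)
# comp = get_followers("competitor_handle", max_pages=20)
# leads = warmest_leads(persona, comp)
```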
Exporting Audience Data to CSV
Whichever technique you use, the export pattern is the same. Here is a reusable function that takes any list of user objects and writes them to CSV:
```python
import csv

def export_users_to_csv(users, output_file="audience.csv"):
    """Export a list of user objects to CSV."""
    fields = [
        "user_id", "username", "display_name", "description",
        "followers_count", "followings_count", "tweets_count",
        "location", "verified", "created_at",
    ]
    with open(output_file, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for u in users:
            writer.writerow({
                "user_id": u.get("id", ""),
                "username": u.get("username", ""),
                "display_name": u.get("display_name", ""),
                "description": (u.get("description") or "").replace("\n", " "),
                "followers_count": u.get("followers_count", 0),
                "followings_count": u.get("followings_count", 0),
                "tweets_count": u.get("tweets_count", 0),
                "location": u.get("location", ""),
                "verified": u.get("verified", False),
                "created_at": u.get("created_at", ""),
            })
    print(f"Exported {len(users)} users to {output_file}")

# Works with output from any technique
export_users_to_csv(users, "ai_engineers.csv")
export_users_to_csv(followers, "competitor_followers.csv")
export_users_to_csv(members, "community_members.csv")
```
Next Steps
- Search Tweets - full guide to /search-tweets for content-based audience discovery.
- Search Operators - Boolean logic and filters for precision queries.
- Competitor Analysis - use audience data as part of a competitive intelligence pipeline.
- Real-Time Monitoring - set up alerts to catch target audience members the moment they post.
- Track Mentions - track mentions of your brand or competitors to find engaged audience segments.
- API Reference - full specification for /search-users, /followers, /verified-followers, /community-members, and all Sorsa API endpoints.