Tutorial · March 2026 · 8 min read

How to Scrape LinkedIn Profiles with Python (2026)

3 approaches, from worst to best. With code you can copy and run.

Every developer building a GTM automation stack eventually needs LinkedIn data. Job titles, companies, education history, skills. The profile is the single richest public source of B2B contact intelligence, and getting it into your pipeline programmatically is harder than it should be.

Here are 3 ways to scrape LinkedIn profiles with Python, ordered from most painful to most practical.

Method 1: DIY with Selenium or Playwright

The first instinct is to automate a browser. Log in with your credentials, navigate to a profile URL, parse the DOM. It looks straightforward:

from selenium import webdriver
from selenium.webdriver.common.by import By
import time

driver = webdriver.Chrome()
driver.get("https://www.linkedin.com/login")

# Log in
driver.find_element(By.ID, "username").send_keys("you@example.com")
driver.find_element(By.ID, "password").send_keys("your_password")
driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
time.sleep(3)

# Navigate to a profile
driver.get("https://www.linkedin.com/in/satyanadella")
time.sleep(5)

# Try to extract data
name = driver.find_element(By.CSS_SELECTOR, "h1").text
headline = driver.find_element(By.CSS_SELECTOR, ".text-body-medium").text
print(f"{name} - {headline}")

This works once. Maybe twice. Then reality sets in.

LinkedIn detects automation through browser fingerprinting, request patterns, and session analysis. After roughly 50 requests, you will hit CAPTCHAs, temporary blocks, or a permanent account ban. The selectors change without notice as LinkedIn updates their frontend, so your scraper breaks every few weeks. You end up maintaining a fragile, stateful system that spends more time broken than working.

Verdict: Works for 10 profiles. Breaks at 100. Not viable for anything resembling production use.

Method 2: Browser Extensions (PhantomBuster, LinkedIn Helper)

Cloud-based browser automation tools abstract away the Selenium pain. You install an extension, configure a "phantom" or workflow, and point it at a list of profile URLs. The tool runs a headless browser in the cloud, handles login sessions, and exports CSV files.

The problems are different but still blocking for developers:

  - Cost. Cloud automation tools typically run $0.14 or more per profile, more than ten times the price of an API call.
  - No programmatic access. Output arrives as a CSV export, not a JSON response you can wire into a pipeline.
  - Same ban risk. The tool drives a real LinkedIn session, usually authenticated as you, so rate limits and account restrictions still apply.

Verdict: Fine for manual prospecting and one-off research. Not a fit for developers building automated enrichment pipelines.

Method 3: REST API (ScrapeLinkedIn.com)

The approach that actually works at scale: call an API, get structured JSON back. No browser, no sessions, no selectors to maintain.

Here is a complete working example using requests:

import requests

API_KEY = "sk_your_key"
BASE_URL = "https://scrapelinkedin.com/api/v1"

# Scrape a profile
response = requests.post(
    f"{BASE_URL}/scrape",
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    json={"linkedin_url": "https://linkedin.com/in/satyanadella"}
)

data = response.json()
print(data["data"]["profile_data"]["full_name"])   # "Satya Nadella"
print(data["data"]["profile_data"]["headline"])    # "Chairman and CEO at Microsoft"
print(data["credits_remaining"])                   # 4

That is the entire integration. One HTTP call, structured response, no state to manage.
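One detail worth handling on your side: the examples in this post pass profile URLs in several forms (bare "linkedin.com/in/...", full "https://www.linkedin.com/in/..."). A small normalization helper keeps your inputs consistent before they hit the API. This helper is our own illustration, not part of any client library:

```python
from urllib.parse import urlparse

def normalize_linkedin_url(url: str) -> str:
    """Canonicalize profile URL variants ("linkedin.com/in/x",
    "https://www.linkedin.com/in/x/") to one https form."""
    if "://" not in url:
        url = "https://" + url
    parsed = urlparse(url)
    host = parsed.netloc.lower().removeprefix("www.")  # drop "www." prefix
    path = parsed.path.rstrip("/")                     # drop trailing slash
    return f"https://{host}{path}"

print(normalize_linkedin_url("linkedin.com/in/satyanadella"))
# https://linkedin.com/in/satyanadella
```

Deduplicating on the normalized form also stops you from paying twice for the same profile submitted under two spellings.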

Batch scraping: up to 1,000 profiles per request

For larger jobs, the batch endpoint lets you submit a list of URLs and poll for results:

import time

# Batch: up to 1,000 profiles in one request
batch = requests.post(
    f"{BASE_URL}/scrape/batch",
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    json={"linkedin_urls": ["linkedin.com/in/person1", "linkedin.com/in/person2"]}
)
batch_id = batch.json()["batch_id"]

# Poll until the batch reaches a terminal state
while True:
    status = requests.get(
        f"{BASE_URL}/scrape/batch/{batch_id}",
        headers={"X-API-Key": API_KEY}
    )
    result = status.json()
    if result["status"] in ("completed", "partial", "timed_out"):
        break
    time.sleep(10)

for profile in result["results"]:
    print(f"{profile['full_name']} - {profile['headline']}")

Pricing

$0.01 per profile. You get 5 free credits on signup, no credit card required. No monthly commitment, no minimum spend. You buy credits and use them when you need them.
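A quick back-of-envelope based on the numbers above (1 cent per profile, 5 free credits); the helper name is our own and integer cents avoid floating-point surprises:

```python
def estimated_cost_cents(profiles: int, free_credits: int = 5,
                         cents_per_profile: int = 1) -> int:
    """Cost in cents to scrape `profiles` profiles after the free credits run out."""
    return max(0, profiles - free_credits) * cents_per_profile

print(estimated_cost_cents(1_000))  # 995 cents, i.e. $9.95
print(estimated_cost_cents(5))      # 0
```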

Verdict: The right choice for developers. One dependency, deterministic output, scales to thousands of profiles without infrastructure headaches.

Quick comparison

                     Selenium   Extensions   API
Setup time           Hours      Minutes      Seconds
Cost per profile     Free*      $0.14+       $0.01
Scales to 1,000+     No         Manual       Yes
Programmatic         Yes        No           Yes
Maintenance          High       None         None

*Free but costs engineering time and banned accounts.

Getting started

Three steps:

  1. Register. Create an account at /docs. You get 5 free credits immediately.
  2. Verify. Check your email and verify your account.
  3. Scrape. Use your API key to start pulling profile data.

If you want to test before writing any Python, here is a one-liner:

curl -X POST https://scrapelinkedin.com/api/v1/scrape \
  -H "X-API-Key: sk_your_key" \
  -H "Content-Type: application/json" \
  -d '{"linkedin_url": "https://linkedin.com/in/satyanadella"}'

Full endpoint reference, response schemas, and error codes are in the API documentation.

Try it free. 5 profiles, no credit card.

Get structured LinkedIn profile data in seconds.

Get Your API Key
