Guide · March 2026 · 7 min read

LinkedIn Profile API for AI Agents: No MCP Server Required

Your agent needs LinkedIn data. Here's the simplest way to wire it up.

You're building an AI agent that needs LinkedIn data. Maybe it researches prospects before a sales call. Maybe it enriches leads in a pipeline. Maybe it qualifies inbound signups by checking their work history. The question is always the same: how do you give your agent access to LinkedIn profiles?

There are two paths. One involves running a persistent server. The other is a single HTTP request. This post walks through both so you can decide which fits your stack.

The MCP Server Approach (and Why It's Overkill)

MCP (Model Context Protocol) servers are the standard way to give AI agents tool access. You run a server process, point your agent at it via a config file, and the agent discovers available tools through the protocol. It works well for complex integrations: databases, file systems, IDEs, anything with multiple operations and shared state.

For LinkedIn data, an MCP server means:

  - a server process that runs whether or not the agent ever asks for a profile
  - a config file to write and distribute to every agent that needs the tool
  - the MCP SDK and the server code itself as dependencies
  - tool descriptions loaded into context on every conversation

The problem: all you actually need is "give me this person's LinkedIn data." That's one HTTP request. Running a persistent server for a single endpoint is like renting a warehouse to store a shoebox.

MCP servers make sense for stateful, multi-operation integrations. A database MCP server manages connections, transactions, and schema introspection across dozens of operations. A filesystem MCP server handles reads, writes, watches, and permissions. LinkedIn profile lookup is none of those things. It is a stateless data fetch. A REST API call is simpler in every dimension: setup, runtime, debugging, and portability.

The REST API Pattern (the Better Way)

Your agent already knows how to make HTTP requests. Every LLM framework, from OpenAI's function calling to Claude's tool use to LangChain's tool abstraction, supports defining tools that are just HTTP requests under the hood.

The pattern is straightforward:

  1. Define a tool that calls the ScrapeLinkedIn API
  2. Parse the JSON response
  3. Done

No server process to manage. No config file to distribute. No context bloat from tool descriptions that load on every conversation whether they're needed or not. The agent calls the tool when it needs LinkedIn data, and the tool makes one HTTP request.

Here's what this looks like in practice, across two different agent frameworks.

Claude Code: The Skill File Approach

Claude Code uses skill files: markdown documents that teach the agent how to use a tool. Drop a file in ~/.claude/skills/ and Claude Code can use it on demand. No server, no config, loads into context only when relevant.

Here's a complete skill file for LinkedIn profile lookup:

# LinkedIn Profile Lookup

## Usage
When you need LinkedIn profile data for a person, use the ScrapeLinkedIn API.

## Setup
Ensure SCRAPELINKEDIN_API_KEY is set in your environment.

## Single Profile Lookup
```bash
curl -s -X POST https://scrapelinkedin.com/api/v1/scrape \
  -H "X-API-Key: $SCRAPELINKEDIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"linkedin_url": "LINKEDIN_URL_HERE"}' | jq '.data'
```

## Search by Name + Company
```bash
curl -s -X POST https://scrapelinkedin.com/api/v1/scrape \
  -H "X-API-Key: $SCRAPELINKEDIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"first_name": "NAME", "last_name": "NAME", "company_name": "COMPANY"}' | jq '.data'
```

## Response Fields
The API returns: full_name, headline, location, summary,
experience, education, and honors_and_awards.

## Cost
$0.01 per successful lookup. Failed lookups are free.
Results are cached for 24 hours (re-lookups are free).

Save that as ~/.claude/skills/linkedin.md and Claude Code can look up any LinkedIn profile on demand. No MCP server, no process management, no config file. The skill loads into context only when the agent determines it's relevant to the task.

This is the key advantage over an MCP server for simple integrations: zero overhead when the tool isn't needed, and zero infrastructure to maintain when it is.

Python Agent: OpenAI Function Calling

If you're building a custom agent with OpenAI's API, the LinkedIn lookup is just another function tool:

```python
import os
import requests

def scrape_linkedin(linkedin_url: str) -> dict:
    """Fetch LinkedIn profile data for a single profile URL."""
    # Parameter name matches the tool schema below so model-emitted
    # arguments map onto it directly.
    response = requests.post(
        "https://scrapelinkedin.com/api/v1/scrape",
        headers={
            "X-API-Key": os.environ["SCRAPELINKEDIN_API_KEY"],
            "Content-Type": "application/json",
        },
        json={"linkedin_url": linkedin_url},
        timeout=30,
    )
    response.raise_for_status()  # surface HTTP errors instead of parsing an error body
    return response.json()["data"]

# Define as an OpenAI function tool
tools = [{
    "type": "function",
    "function": {
        "name": "scrape_linkedin",
        "description": "Get LinkedIn profile data for a person",
        "parameters": {
            "type": "object",
            "properties": {
                "linkedin_url": {"type": "string", "description": "LinkedIn profile URL"}
            },
            "required": ["linkedin_url"]
        }
    }
}]
```

When the model decides it needs LinkedIn data, it emits a tool call. Your code runs scrape_linkedin(), passes the result back, and the model continues with structured profile data in context. The entire integration is about thirty lines.
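The snippet above defines the tool; the dispatch side is only a few more lines. A minimal sketch, assuming the Chat Completions tool-call shape (entries with an `id` and a `function` carrying a name plus JSON-encoded arguments). `handle_tool_call` and the stubbed registry entry are our own names, not part of the OpenAI SDK:

```python
import json

# Tool name -> local function. In real use, register scrape_linkedin here;
# the lambda below is a stand-in so the sketch runs without network access.
TOOL_REGISTRY = {
    "scrape_linkedin": lambda linkedin_url: {"full_name": "Stub Person", "headline": "Stub"},
}

def handle_tool_call(tool_call: dict) -> dict:
    """Execute one model-emitted tool call and build the tool-role reply message."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])  # arguments arrive JSON-encoded
    result = TOOL_REGISTRY[name](**args)
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": json.dumps(result),  # results go back to the model as a string
    }
```

Append the returned message to the conversation and call the model again; it then continues with the profile data in context.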

Real-World Agent Workflow: Pre-Call Research

Here's a concrete example of how this plays out end to end.

You tell your agent: "I have a call with Sarah Chen from Stripe in 30 minutes. What should I know?"

The agent's chain of thought:

  1. I need background on Sarah Chen at Stripe
  2. Call scrape_linkedin("https://linkedin.com/in/sarahchen")
  3. Parse the response: experience history, education, headline, summary, honors and awards

The agent synthesizes a briefing:

"Sarah has been at Stripe for 3 years, previously at Square for 4 years. Stanford CS grad. She moved from engineering into product management, which means she thinks in systems and technical tradeoffs. Lead with technical depth and concrete implementation details, not marketing slides. She joined Stripe right after their Series H, so she's seen hypergrowth and likely cares about scalability and operational efficiency."

Total cost: $0.01 for the LinkedIn lookup. Total time: roughly 20 seconds including the LLM reasoning. No MCP server running in the background. No config file. Just one API call triggered by the agent when it needed data.
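The synthesis step starts from the raw fields. A small formatter is enough to hand the model something readable. A sketch only: the top-level field names come from the API's documented response, but the nested keys (`title`, `company`, `degree`, `school`) and the `build_briefing` helper are our own assumptions:

```python
def build_briefing(profile: dict) -> str:
    """Condense key profile fields into a short pre-call briefing (illustrative helper)."""
    lines = [f"{profile.get('full_name', 'Unknown')}: {profile.get('headline', '')}"]
    for job in profile.get("experience", []):
        # Nested field names are assumed; adjust to the actual response shape.
        lines.append(f"- {job.get('title', '?')} at {job.get('company', '?')}")
    for school in profile.get("education", []):
        lines.append(f"- {school.get('degree', '?')}, {school.get('school', '?')}")
    return "\n".join(lines)
```

The resulting string goes into the model's context alongside the user's question, and the model writes the briefing from there.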

MCP Server vs. REST API: Side-by-Side

| | MCP Server | REST API |
| --- | --- | --- |
| Setup | Config file + server process | One environment variable |
| Runtime | Always-on daemon | On-demand HTTP call |
| Context overhead | Tool descriptions loaded always | Zero until called |
| Dependencies | MCP SDK + server code | curl or requests |
| Portability | MCP-compatible agents only | Any agent, any framework |
| Failure mode | Server crashes = no data | Stateless, retry = works |
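That last row deserves a code note: because the lookup is stateless, recovery is a plain retry. A minimal sketch with exponential backoff; `with_retries` is our own helper, and the attempt count and delay are arbitrary defaults:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(); on exceptions, retry with exponential backoff, re-raising the last error."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

Wrapping the earlier lookup is one line: `profile = with_retries(lambda: scrape_linkedin(url))`. There is no server state to reconcile after a failure; the retried call is as good as the first.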

This is not an argument against MCP. MCP is excellent for what it was designed for: giving agents rich, stateful access to complex systems. But LinkedIn profile lookup is not a complex system. It is a single data fetch. Match the tool to the job.

When Should You Use an MCP Server?

To be fair, there are cases where wrapping LinkedIn data in an MCP server could make sense:

  - your agent platform only supports MCP tools and cannot define raw HTTP calls
  - the lookup is one of many operations behind a shared MCP server you already run and maintain

For everyone else, a REST API call gets you the same data with less code, less infrastructure, and fewer things that can break at 2 AM.

Get your API key. Give your agent LinkedIn data in 5 minutes.

$0.01 per profile. 5 free lookups on signup. No credit card required.

Get Your API Key
