
Zero-Dependency MCP Setup: Browser Debugging Without Node.js

Gasoline MCP gives AI coding assistants real-time browser observability — console logs, network traffic, WebSocket frames, DOM state — through a single Go binary with zero runtime dependencies. No Node.js. No Python. No Puppeteer. No node_modules/.

Most MCP-based browser debugging tools are built on Node.js. That means installing a JavaScript runtime, pulling hundreds of npm packages, and trusting a dependency tree you’ll never fully audit.

Chrome DevTools MCP requires Puppeteer, which alone downloads a bundled Chromium binary and brings roughly 200MB of dependencies. BrowserTools MCP needs multiple npm packages across two separate Node.js processes. Even simpler tools pull in dozens of transitive dependencies.

Every dependency is a surface. A compromised package in your node_modules/ folder can exfiltrate data, modify behavior, or introduce vulnerabilities that won’t show up until your next audit. Enterprise security teams know this. It’s why they flag tools with large dependency trees, and why supply chain attacks against npm packages have become one of the most common vectors in the JavaScript ecosystem.

Gasoline takes a different approach. The entire server is a single Go binary, roughly 10MB, compiled from a codebase with zero external Go dependencies (stdlib only). There is no runtime to install, no interpreter to maintain, and no package manager to trust.

The binary you download is the binary you run. There’s nothing else.

|  | Gasoline | Chrome DevTools MCP | BrowserTools MCP | Cursor MCP Extension |
|---|---|---|---|---|
| Runtime required | None | Node.js 22+ | Node.js | Node.js |
| Install size | ~10MB | ~200MB+ | ~150MB+ | ~100MB+ |
| Transitive deps | 0 | 300+ (Puppeteer tree) | 100+ | 50+ |
| Supply chain surface | Single binary | npm registry | npm registry | npm registry |
| Reproducible build | Yes (Go binary) | Depends on lock file | Depends on lock file | Depends on lock file |

When your security team evaluates a new tool, they ask: what does it install, what does it connect to, and what can it access?

With Gasoline, the answers are simple:

  • What does it install? One binary.
  • What does it connect to? Nothing. The server listens on 127.0.0.1 only and makes zero outbound network calls.
  • What can it access? Only the browser telemetry your extension sends to localhost.

This makes SOC2 audits straightforward. There’s no dependency manifest to review, no lock file drift to monitor, and no risk of a transitive dependency introducing a vulnerability between audits. The binary is statically compiled — you can checksum it, store it in your artifact registry, and know exactly what’s running in every environment.

You install Gasoline with npx gasoline-mcp@latest. That might look like a Node.js dependency — but it isn’t.

The npm package is a thin wrapper that downloads the correct prebuilt Go binary for your platform. Once installed, there is no Node.js process running. The MCP server is the Go binary, communicating over stdio. npx is just the delivery mechanism, chosen because it’s the convention MCP clients expect. If you prefer, you can download the binary directly from GitHub releases and skip npm entirely.

Does Gasoline MCP require Node.js?

No. Gasoline’s MCP server is a compiled Go binary. It requires no runtime — not Node.js, not Python, not Java. The npx gasoline-mcp command downloads a prebuilt binary; it does not start a Node.js server. You can also install the binary directly without npm.

How many dependencies does Gasoline have?

Zero. The Go server uses only the Go standard library. The Chrome extension is vanilla JavaScript with no frameworks or build tools. There are no transitive dependencies, no node_modules/, and no lock files to maintain.

npx gasoline-mcp@latest

One command. One binary. Nothing else to install, configure, or trust.

Learn more about Gasoline’s security architecture

Why document.body.innerHTML Ruins LLM Context Windows

Gasoline MCP gives AI coding assistants real-time browser context via the Model Context Protocol. One of the hardest problems it solves is this: how do you represent a web page to an LLM without blowing up the context window?

The most common answer in the wild is wrong.

Many MCP tools, browser automation scripts, and AI coding workflows grab DOM content the obvious way:

document.body.innerHTML

This dumps the entire raw HTML of the page into the LLM’s context window. Every ad banner. Every tracking pixel. Every inline style. Every SVG path definition. Every base64-encoded image. Every third-party script tag. Every CSS class name generated by your framework’s hash function.

A typical web page might contain 500KB of raw HTML. The actual meaningful content — the text, the form fields, the error messages your AI assistant needs to see — might be 5KB. That’s 99% waste in a context window with hard token limits.
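
You can get a feel for the gap on any page you have open. The sketch below is a rough estimate only, assuming an average of about four characters per token, which varies by tokenizer:

```ts
// Rough token estimate for comparison purposes.
// Assumption: ~4 characters per token on average (the true ratio depends on the tokenizer).
const approxTokens = (s: string) => Math.ceil(s.length / 4);

const rawMarkup = document.body.innerHTML;    // wrappers, class names, scripts, SVG paths
const visibleText = document.body.innerText;  // roughly the text a user actually reads

console.log(`innerHTML:    ~${approxTokens(rawMarkup)} tokens`);
console.log(`visible text: ~${approxTokens(visibleText)} tokens`);
```

On a typical production page the first number dwarfs the second, which is exactly the waste described above.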

Consider a React dashboard page. A SaaS admin panel with a sidebar, a data table, some charts, and a modal.

| Approach | Token Count | Meaningful Content |
|---|---|---|
| document.body.innerHTML | ~200,000 tokens | ~2,000 tokens |
| Accessibility tree | ~3,000 tokens | ~2,000 tokens |

With innerHTML, you are burning 99% of your context budget on <div class="css-1a2b3c"> wrappers, Webpack chunk references, SVG coordinate data, and analytics scripts. In a model with a 128K token context window, a single innerHTML dump can consume more than the entire window — leaving zero room for conversation history, system prompts, or the code your assistant is actually working on.

Worse, the signal-to-noise ratio is so low that the LLM struggles to locate the relevant content even when it fits. Buried somewhere in 200K tokens of markup is the error message you need it to read.

Gasoline takes a fundamentally different approach. Instead of raw HTML, it uses the accessibility tree — the structured, semantic representation that browsers build for screen readers.

The accessibility tree contains only meaningful elements:

  • Headings and document structure
  • Buttons, links, and interactive controls
  • Form fields with their labels and current values
  • Text content that a user would actually read
  • ARIA labels and roles that describe element purpose
  • State information — checked, expanded, disabled, selected

It strips out everything else. No CSS. No scripts. No SVG paths. No base64 blobs. No tracking pixels. What remains is a clean, hierarchical representation of what the page actually shows and does.
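
To make that concrete, here is a simplified sketch of the kind of node such a tree carries. The field names are illustrative, not Gasoline's actual output format:

```ts
// Illustrative node shape only, not Gasoline's documented output format.
interface A11yNode {
  role: string;                 // "button", "textbox", "heading", ...
  name?: string;                // accessible name: label, aria-label, or visible text
  value?: string;               // current value for form fields
  state?: {
    checked?: boolean;
    expanded?: boolean;
    disabled?: boolean;
    selected?: boolean;
  };
  children?: A11yNode[];
}

// A login form reduces to a handful of semantic nodes instead of kilobytes of markup.
const loginForm: A11yNode = {
  role: "form",
  name: "Log in",
  children: [
    { role: "textbox", name: "Email", value: "" },
    { role: "textbox", name: "Password", value: "" },
    { role: "button", name: "Sign in", state: { disabled: false } },
  ],
};
```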

Beyond the full accessibility tree, Gasoline provides a query_dom MCP tool that lets AI assistants query specific elements using CSS selectors:

query_dom(".error-message")
query_dom("form#login input")
query_dom("[role='alert']")

Instead of dumping the entire page and hoping the LLM finds the relevant piece, the assistant can request exactly what it needs. A targeted query might return 50 tokens instead of 200,000.

This changes the interaction model from “here’s everything, good luck” to “ask for what you need.”
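
What comes back from a targeted query is small and structured. The shape below is a hypothetical illustration of the idea; the field names are assumptions, not Gasoline's documented schema:

```ts
// Hypothetical result for query_dom("[role='alert']"). Field names are illustrative.
const matches = [
  {
    selector: "[role='alert']",
    tag: "div",
    text: "Your session has expired. Please log in again.",
    visible: true,
  },
];
// A handful of fields like these give the assistant what it needs to reason about
// the error, instead of hunting for it inside 200K tokens of markup.
```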

Why is raw innerHTML such a poor fit for an LLM? Three reasons:

  1. Token waste. Raw HTML is mostly structural noise — closing tags, class attributes, data attributes, script contents. LLMs pay per token. You are paying to process markup that carries zero information about your bug.

  2. Signal dilution. Even when it fits in context, the LLM must locate a needle in a haystack. Error messages, form validation failures, and visible text get buried under layers of generated markup. Model attention is a finite resource.

  3. Fragility. innerHTML output changes with every framework update, CSS-in-JS hash rotation, and ad network injection. The representation is unstable and framework-dependent. The accessibility tree is stable because it represents semantics, not implementation.

Gasoline captures the accessibility tree directly from the browser via its Chrome extension. When an AI assistant calls the get_console_logs, get_accessibility_tree, or query_dom MCP tools, Gasoline returns structured, token-efficient data:

  • Accessibility tree: Full semantic structure of the page, typically 50-100x smaller than innerHTML
  • DOM queries: Targeted CSS selector queries returning only matching elements
  • Console logs: Errors and warnings already captured in real time, no DOM parsing needed

The result: your AI assistant gets the information it needs to debug your application without consuming the context window budget it needs to actually reason about the problem.

npx gasoline-mcp@latest

One command. Zero dependencies. Your AI assistant gets clean, structured browser context instead of raw HTML noise.

Learn more about DOM queries ->

How to Generate Playwright Tests from Real Browser Sessions

Gasoline MCP is a browser extension and local server that captures real-time browser telemetry and makes it available to AI coding assistants via Model Context Protocol. It is the only MCP browser tool that can generate Playwright tests from recorded browser sessions.

Writing Playwright tests by hand is slow. You have to inspect the page, figure out the right selectors, reconstruct the sequence of user actions, and write assertions that actually verify meaningful behavior. For a single form submission flow, you might spend 20 minutes wiring up page.locator() calls, fill() sequences, and waitForResponse() handlers.

Most teams know they should write regression tests after fixing a bug. In practice, the effort-to-value ratio kills it. The bug is fixed. The PR is open. Nobody wants to spend another half hour writing a test for something that already works. So the test never gets written, and six months later the same bug comes back.

Gasoline solves this by recording what actually happens in your browser. Every click, navigation, form fill, and network call is captured as structured data. When you are done, your AI assistant calls the generate_test MCP tool, and Gasoline produces a ready-to-run Playwright test file based on the real session.

There is no synthetic scenario construction. The test reflects exactly what happened in the browser.

Here is an example of what Gasoline generates after a session where you log in and update a user profile:

```ts
import { test, expect } from '@playwright/test';

test('update user profile after login', async ({ page }) => {
  await page.goto('http://localhost:3000/login');
  await page.locator('input[name="email"]').fill('[email protected]');
  await page.locator('input[name="password"]').fill('test-password');
  await page.locator('button[type="submit"]').click();
  await page.waitForURL('**/dashboard');

  await page.locator('a[href="/settings/profile"]').click();
  await page.locator('input[name="displayName"]').fill('Updated Name');

  const responsePromise = page.waitForResponse(
    (resp) => resp.url().includes('/api/users/profile') && resp.status() === 200
  );
  await page.locator('button:has-text("Save")').click();
  await responsePromise;

  await expect(page.locator('[data-testid="success-toast"]')).toBeVisible();
});
```

That test covers navigation, form interaction, API response validation, and UI confirmation. Gasoline generates it from the captured session data, not from a template.

The highest-value time to write a test is immediately after fixing a bug. You have just reproduced the issue, identified the root cause, and verified the fix. The exact sequence of actions that triggers the bug is fresh.

With Gasoline, you fix the bug in your normal browser, and the session is already recorded. Ask your AI assistant to generate a regression test, and you get a Playwright file that encodes the exact reproduction steps plus the expected passing behavior. That bug is now permanently guarded.

This turns regression testing from a discipline problem into a workflow byproduct.

How does Gasoline MCP generate Playwright tests?


Gasoline captures browser events through its Chrome extension: navigations, clicks, form inputs, and network requests with their responses. This telemetry is sent to the local Gasoline server over a WebSocket connection. When your AI assistant calls the generate_test MCP tool, Gasoline translates the recorded event timeline into sequential Playwright actions, mapping DOM interactions to locator strategies and network activity to response assertions.
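
Conceptually, that translation maps each recorded event to a single Playwright statement. The sketch below illustrates the idea; the event shape and field names are assumptions, not Gasoline's internal representation:

```ts
// Conceptual sketch of event-to-Playwright translation.
// The RecordedEvent shape is an assumption for illustration, not Gasoline's internals.
interface RecordedEvent {
  type: "navigate" | "click" | "input";
  url?: string;      // for navigations
  selector?: string; // for DOM interactions
  value?: string;    // for form inputs
}

function toPlaywrightStatement(event: RecordedEvent): string {
  switch (event.type) {
    case "navigate":
      return `await page.goto('${event.url}');`;
    case "input":
      return `await page.locator('${event.selector}').fill('${event.value}');`;
    case "click":
      return `await page.locator('${event.selector}').click();`;
  }
}

// { type: "click", selector: 'button[type="submit"]' }
//   -> await page.locator('button[type="submit"]').click();
```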

Gasoline captures page navigations, link clicks, button clicks, form field inputs, select changes, checkbox toggles, and full network request/response pairs including status codes and URLs. It also captures console output and JavaScript exceptions, which can inform assertion logic in the generated tests.

Chrome DevTools MCP, BrowserTools MCP, and Cursor MCP Extension all provide some level of browser observability to AI assistants. None of them offer test generation. They can surface console logs and network errors, but they cannot turn a browsing session into a runnable Playwright test. Gasoline is the only tool in the MCP ecosystem with this capability.

npx gasoline-mcp@latest

One command. No runtime dependencies. No accounts. See the full test generation guide for configuration options and advanced usage.

How to Give Cursor Access to Browser Console Logs

Cursor is a powerful AI code editor, but it has a blind spot: your browser. When your app throws an error at runtime, Cursor has no idea. You end up copying console output, screenshotting network tabs, and pasting fragments into chat — losing context every time.

Gasoline MCP is an open-source browser extension + MCP server that streams real-time browser telemetry to AI coding assistants. It connects Cursor directly to your browser so it can read console logs, network failures, and DOM state without you lifting a finger.

The Problem: Cursor Can’t See Your Browser


When something breaks in the browser, your workflow probably looks like this:

  1. Notice a blank page or broken UI
  2. Open Chrome DevTools
  3. Read through the console errors
  4. Copy the relevant ones
  5. Paste them into Cursor and explain what happened

By the time Cursor sees the error, you’ve already done the hard part — finding it. And you’ve stripped away surrounding context: the network request that failed before the error, the sequence of console warnings that preceded it, the state of the DOM.

Cursor needs raw, continuous access to what the browser is doing.

You could keep a DevTools window open and manually relay information to Cursor. But manual copy-paste has real costs:

  • Lost context. You copy one error but miss the failed API call that caused it.
  • Stale data. By the time you paste, the browser state has changed.
  • Friction. Every round trip between browser and editor breaks your flow.

Cursor supports MCP (Model Context Protocol) natively, which means any tool that speaks MCP can feed data directly into Cursor’s context. Gasoline uses this to bridge the gap.

Once connected, Cursor can query your browser for live telemetry:

| Data Type | What Cursor Sees |
|---|---|
| Console logs | All console.log, console.warn, console.error output |
| JavaScript errors | Uncaught exceptions with full stack traces |
| Network requests | URLs, status codes, timing, headers |
| Network bodies | Request and response payloads (opt-in) |
| WebSocket messages | Real-time WS frame data |
| DOM state | Query elements, read attributes, check visibility |

This is not a snapshot. Gasoline captures events continuously, so Cursor can ask “what happened in the browser?” at any point and get the full picture.

To connect Cursor, start the local server first:
npx gasoline-mcp@latest

This downloads a single Go binary (no Node.js runtime, no node_modules/). It starts a local MCP server on your machine.

Install the Gasoline extension from the Chrome Web Store. It connects to the local server automatically.

Open Cursor Settings, navigate to MCP, and add a new server with this configuration:

```json
{
  "mcpServers": {
    "gasoline": {
      "command": "npx",
      "args": ["-y", "gasoline-mcp@latest"]
    }
  }
}
```

In Cursor’s chat or agent mode, just ask:

“Check the browser for errors”

Cursor will call Gasoline’s MCP tools, read the captured telemetry, and respond with what it finds — and often fix the issue in the same turn.

Real Workflow: React Dashboard Blank Screen


Without Gasoline (5 steps):

  1. See blank screen in browser
  2. Open DevTools, find TypeError: Cannot read properties of undefined (reading 'map') in console
  3. Switch to Network tab, notice the /api/dashboard request returned a 500
  4. Copy both the error and the failed request details
  5. Paste into Cursor, explain the situation, wait for a fix

With Gasoline (1 step):

Ask Cursor: “The dashboard page is blank. Check the browser and fix it.”

Cursor reads the console error and the failed network request simultaneously through Gasoline, identifies that the API returned a 500 causing an undefined .map() call, and adds a null check or error boundary — all in one turn.
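
The fix it lands on is the standard guard for this failure mode. A minimal sketch, with a hypothetical component and prop names:

```tsx
import React from 'react';

interface Item {
  id: string;
  name: string;
}

// Hypothetical component and prop names: illustrates the null check for an
// /api/dashboard request that returned a 500 and left `items` undefined.
function DashboardList({ items }: { items?: Item[] }) {
  if (!items) {
    // Render a fallback instead of calling .map() on undefined.
    return <p>Unable to load dashboard data.</p>;
  }
  return (
    <ul>
      {items.map((item) => (
        <li key={item.id}>{item.name}</li>
      ))}
    </ul>
  );
}

export default DashboardList;
```

An error boundary higher in the tree is the complementary fix when the whole subtree should fail gracefully.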

Install the Gasoline Chrome extension and add the MCP server config to Cursor’s settings. Gasoline handles the connection between browser and editor over localhost. No accounts, no cloud services, no API keys.

Gasoline runs entirely on your machine. The server binds to 127.0.0.1 only and rejects non-localhost connections at the TCP level. It never makes outbound network calls. Sensitive headers (Authorization, Cookie) are stripped from captured network data by default. Request and response body capture is opt-in.

npx gasoline-mcp@latest

One command to give Cursor full visibility into your browser. For the complete Cursor integration guide, see the Cursor + Gasoline setup docs.

How to Debug Browser Errors with Claude Code Using MCP

Claude Code is powerful, but it has a blind spot: it can’t see your browser. When your frontend throws an error, you open DevTools, find the relevant console message, copy it, switch to your terminal, paste it, and hope you grabbed enough context. This is slow, lossy, and breaks your flow.

Gasoline MCP is an open-source browser extension + MCP server that streams real-time browser telemetry (console logs, network errors, exceptions, WebSocket events) to AI coding assistants like Claude Code, Cursor, Windsurf, and Zed. It closes the feedback loop between browser and AI — automatically.

Without browser access, Claude Code operates on incomplete information. A typical debugging cycle looks like this:

  1. You make a code change
  2. You reload the browser
  3. Something breaks
  4. You open DevTools, scroll through console output
  5. You copy the error, maybe the stack trace
  6. You paste it into Claude Code
  7. Claude asks a follow-up — back to DevTools

You lose context at every step. Stack traces get truncated. Network errors get missed entirely. You never think to check the WebSocket connection that silently dropped.

Gasoline connects your browser directly to Claude Code via MCP (Model Context Protocol). Once connected, Claude Code can:

  • Read console logs — errors, warnings, and info messages with full stack traces
  • See network failures — failed API calls with status codes, URLs, and timing
  • Inspect request/response bodies — see exactly what your API returned
  • Monitor WebSocket events — catch dropped connections and malformed frames
  • Query the DOM — inspect element state with CSS selectors
  • Generate Playwright tests — turn a real browser session into a reproducible test

Instead of you copying errors to Claude, Claude pulls what it needs directly.

How Do I Connect Claude Code to My Browser?


Setup takes under 60 seconds.

Step 1: Start the server

npx gasoline-mcp@latest

Single Go binary. No Node.js runtime. No node_modules/. Zero dependencies.

Step 2: Install the Chrome extension

Grab it from the Chrome Web Store (search “Gasoline”). The toolbar icon shows Connected when the server is running.

Step 3: Add the MCP config

Add this to .mcp.json in your project root:

```json
{
  "mcpServers": {
    "gasoline": {
      "command": "npx",
      "args": ["-y", "gasoline-mcp@latest"]
    }
  }
}
```

Restart Claude Code. The server starts automatically on every session.

Step 4: Ask Claude

Open your web app in Chrome and ask:

“What errors are in the browser?”

Claude Code calls Gasoline’s observe tool and gets back structured data — not a screenshot, not a blob of text, but parsed console entries with timestamps, levels, stack traces, and source locations.
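
Each entry might look roughly like the shape below. The field names are an illustration, not Gasoline's documented schema:

```ts
// Illustrative console entry shape. Field names are assumptions, not Gasoline's schema.
interface ConsoleEntry {
  level: "log" | "info" | "warn" | "error";
  message: string;
  timestamp: string;                                       // e.g. ISO 8601
  stack?: string;                                          // present for uncaught exceptions
  source?: { url: string; line: number; column: number };  // where the log originated
}
```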

What Browser Data Can Claude Code See Through Gasoline MCP?


Gasoline exposes five composite tools to Claude Code:

| Tool | What Claude Can Do |
|---|---|
| observe | Read console errors, network requests, WebSocket events, Web Vitals, page info |
| analyze | Detect performance regressions, audit accessibility, diff sessions |
| generate | Create Playwright tests, reproduction scripts, HAR exports |
| configure | Filter noise, manage log levels, set persistent memory |
| query_dom | Inspect live DOM state using CSS selectors |

When Claude calls observe with what: "errors", it gets back every console error from the active tab — structured, timestamped, and ready to act on. When it calls observe with what: "network", it sees every failed HTTP request with status codes, URLs, headers, and optionally full response bodies.
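
For the network case, each failed request comes back as a structured record along these lines. Again, the fields shown are an illustration rather than the documented schema:

```ts
// Illustrative failed-request record. Field names are assumptions, not Gasoline's schema.
interface NetworkEntry {
  url: string;
  method: string;
  status: number;                      // e.g. 500
  durationMs?: number;
  headers?: Record<string, string>;    // sensitive headers are stripped by default
  responseBody?: string;               // only present when body capture is opted in
}
```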

This is not a one-shot snapshot. Gasoline streams continuously. Claude sees errors the moment they happen.

Gasoline runs entirely on localhost. The server binary binds to 127.0.0.1 only, rejects non-localhost connections at the TCP level, and never makes outbound network calls. Authorization headers are stripped by default. Request/response body capture is opt-in.

No data leaves your machine. No accounts. No telemetry.

npx gasoline-mcp@latest

One command. Zero dependencies. Claude Code sees your browser in under a minute.

Full setup guide