Optibase MCP: Ask AI About Your A/B Tests in Plain English

You run A/B tests. You collect data. But getting answers from that data still means clicking through dashboards, filtering date ranges, and cross-referencing results across tests.

What if you could just ask?

Today, we're launching Optibase's MCP Server — a direct integration that connects your Optibase workspace to AI assistants like Claude and Cursor. Ask questions about your A/B tests, heatmaps, traffic, and conversions in natural language. Get answers from your actual data in seconds.

No dashboards. No exports. Just ask.

What Is MCP (and Why Should You Care)?

The Model Context Protocol (MCP) is an open standard that lets AI assistants connect directly to your tools and data. Think of it as a universal API for AI — instead of your assistant guessing or working from screenshots, it pulls live data from your actual Optibase workspace.

For CRO teams, this changes the workflow entirely. Instead of switching between tabs, building reports, and interpreting charts, you stay in one place and ask questions like you would ask a colleague who has all your data memorized.
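Under the hood, MCP is a JSON-RPC 2.0 conversation between the assistant and a server. Here is a minimal sketch of the first two messages a client sends (field values are illustrative, not Optibase-specific):

```python
import json

def jsonrpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request body, the wire format MCP uses."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Step 1: client and server negotiate a protocol version and capabilities.
init = jsonrpc_request("initialize", {
    "protocolVersion": "2025-03-26",   # an MCP spec revision
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1"},
})

# Step 2: the client asks what the server can do; an Optibase server
# would answer with tools for tests, heatmaps, traffic, and conversions.
list_tools = jsonrpc_request("tools/list", req_id=2)

print(json.dumps(list_tools))
# → {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
```

Your assistant handles all of this for you; the point is simply that it speaks a standard protocol, so any MCP-capable client can talk to any MCP server.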

What You Can Do With It

Once connected, your AI assistant has access to your full Optibase workspace. Here's what that unlocks:

Analyze A/B Test Results Instantly

Skip the dashboard. Ask directly:

  • "Show me all my active A/B tests"
  • "Which variant is winning in my pricing page test?"
  • "How has my hero section test performed over the last 30 days?"
  • "Which of my tests have enough data to declare a winner?"

The assistant pulls live results — conversion rates, visitor counts, statistical significance — and gives you a clear answer.
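To make "statistical significance" concrete: a common way to check whether a variant's conversion rate genuinely differs from the control's is a two-proportion z-test. This is a sketch of that standard method, not necessarily the exact statistics engine Optibase uses:

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B's conversion rate differ
    significantly from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)   # rate under "no difference"
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical test: control converts 120/2400, variant 156/2400
p_a, p_b, z, p = ab_significance(120, 2400, 156, 2400)
print(f"{p_a:.1%} vs {p_b:.1%}, p = {p:.3f}")   # p < 0.05 → significant
```

The assistant does this math for you when you ask "which of my tests have enough data to declare a winner?" — the sketch just shows what's behind the answer.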

Get Content Recommendations From Your Data

The assistant can fetch your actual test pages and analyze what each variant looks like. Then it can suggest improvements based on the performance data:

  • "Fetch the page for my hero section test and tell me what each variant shows"
  • "Suggest copy improvements for the losing variant in my headline test"

This is where AI-powered A/B testing gets practical — not generating random ideas, but making recommendations grounded in your real test data.

Explore Traffic and Heatmap Data

No more digging through analytics tabs:

  • "What are my top 10 most visited pages?"
  • "Compare mobile vs desktop traffic on my landing pages"
  • "What's the click and scroll data for my homepage heatmap?"
  • "Break down traffic to /pricing by country"

Get Strategic Recommendations

This is the real unlock. Combine multiple data points and ask for high-level guidance:

  • "Give me a summary of all my running experiments"
  • "Analyze my pricing page test and recommend what I should do next"
  • "Which tests should I stop, and which should I keep running?"
  • "What patterns do you see across my recent test results?"

Instead of a dashboard that shows you numbers, you get an analyst that interprets them.

How to Set It Up (2 Minutes)

Claude Desktop

  1. Open Claude Desktop settings → MCP Servers
  2. Add a new server with the URL: https://my.optibase.io/api/mcp
  3. Log in and select your workspace when prompted
  4. Start asking questions
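
If your Claude Desktop build manages MCP servers through a claude_desktop_config.json file rather than a settings screen, the same endpoint can be wired in there. This sketch assumes the community mcp-remote bridge, which proxies a remote HTTP server onto the local stdio transport that file-based config expects:

```json
{
  "mcpServers": {
    "optibase": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://my.optibase.io/api/mcp"]
    }
  }
}
```

On first launch, the bridge opens a browser window for the OAuth login described in step 3.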

Cursor

  1. Open Cursor settings → MCP Servers
  2. Add a new server — Name: Optibase, Type: HTTP, URL: https://my.optibase.io/api/mcp
  3. Log in and select your workspace
  4. You're live
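
Equivalently, Cursor can read MCP servers from an mcp.json file (~/.cursor/mcp.json globally, or .cursor/mcp.json inside a project). A sketch of the same HTTP server as config; the exact schema may vary between Cursor versions:

```json
{
  "mcpServers": {
    "Optibase": {
      "url": "https://my.optibase.io/api/mcp"
    }
  }
}
```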

Any Other MCP-Compatible Client

Any client that supports MCP works. Same URL, same OAuth login flow.

Tips for Getting the Most Out of It

  • Ask naturally — you don't need to know tool names or API syntax. Just describe what you want.
  • Filter by date — say "last 7 days," "this month," or any specific range.
  • Segment your data — narrow results by country, device type, browser, or OS.
  • Chain questions — ask follow-ups to dig deeper into any result.
  • Ask for recommendations — the AI can analyze patterns across tests and suggest what to do next, not just report numbers.

Why This Matters for CRO Teams

A/B testing tools have always been good at collecting data. The bottleneck has been acting on it. Teams run tests, results sit in dashboards, and decisions get delayed because someone needs to pull the numbers, interpret statistical significance, and write up a recommendation.

MCP removes that bottleneck. Your test data becomes conversational — accessible to anyone on the team who can type a question. That means faster decisions, shorter test cycles, and more experiments shipped.

This is what AI-powered CRO actually looks like: not AI running your tests for you, but AI making your test data instantly useful.

Get Started

The MCP integration is available now on all Optibase plans. Connect your workspace in under two minutes and start asking questions about your A/B tests, heatmaps, and conversions.

Set up the MCP Server →

FAQ

What is the Model Context Protocol (MCP)?

MCP is an open standard that lets AI assistants like Claude and Cursor connect directly to external tools and data sources. It means your AI assistant can pull real data from Optibase instead of working from context you paste in.

Which AI assistants are supported?

Claude Desktop and Cursor are the primary supported clients. Any MCP-compatible client can connect using the same server URL and OAuth authentication flow.

Do I need a specific Optibase plan?

No — the MCP integration is available on all plans, including Free.

Is my data secure?

Yes. Authentication is handled via OAuth. The AI assistant can only access the workspace you explicitly authorize during setup.

Can the AI assistant modify my tests?

No. The MCP integration is read-only. It can query your data and provide analysis, but it cannot create, edit, or delete tests.

Optibase is the experimentation platform built for Webflow (and WordPress). Run A/B tests, split tests, and multivariate experiments with heatmaps and session recordings included — starting free. Try Optibase →