At the beginning of 2025, Optibase was known as an A/B testing tool.
By the end of the year, it had become something much bigger.
We expanded beyond testing. We added heatmaps to see how users interact with pages. We launched session recordings to spot friction. We introduced personalization to adapt experiences in real time. And we rebuilt the core product to make everything faster and easier to use.
This article breaks down everything we shipped in 2025 and how these updates move Optibase from a simple testing tool to a more complete conversion rate optimization (CRO) platform.

A/B testing is powerful. It helps you compare ideas and choose the version that performs better.
But in practice, running experiments is rarely that simple.
Many teams struggle before they even launch a test. They are unsure what to test. They lack clear data. Or they spend too much time setting things up. Even when a winner is found, it is not always easy to understand why it won.
Testing alone does not solve these problems.
To improve conversion rates consistently, teams need three things. They need insight into user behavior. They need a simple way to launch experiments. And they need clear results they can trust and act on.
This is the context behind everything we built in 2025.
Instead of focusing only on creating better split tests, we focused on improving the full experimentation workflow. From understanding how users behave, to launching a test, to analyzing results, and finally implementing changes.
The updates we shipped this year reflect that broader view of optimization.
As Optibase grew, the product became more powerful. But with more power often comes more complexity.
Over time, small additions can make any tool feel heavier. More buttons. More settings. More places to click.
In 2025, we rebuilt both the Optibase dashboard and the Webflow app to simplify that experience.
The redesign was not about new colors or visual polish. It was about reducing friction in daily workflows.
Navigation became clearer. Test creation became faster. Important metrics became easier to spot. Instead of searching for data, teams can now see performance at a glance.
We also improved how tests are structured inside the platform. Variants, goals, and results are organized in a way that feels natural, even for teams that are new to experimentation.
Experimentation is rarely a one-person job.
Marketers create hypotheses. Designers build variants. Developers review changes. Leadership wants to see results. Without structure, this often leads to long message threads and shared links that get lost over time.
In 2025, we introduced Workspaces to bring collaboration directly into Optibase.
Instead of passing test links back and forth, teams can now invite members into a shared environment. Roles and permissions make it clear who can create, edit, or review experiments. All tests live in one organized space, which makes it easier to track what is running and what has already been completed.
This structure becomes even more important as experimentation matures inside a company. When multiple tests run across different pages and teams, visibility matters. Workspaces help prevent confusion, duplicate experiments, and misaligned decisions.
By improving collaboration, experimentation becomes part of a team’s regular workflow rather than an isolated task handled by one person.
Running a test is only half the job. The real value comes from understanding the results and making a decision.
Many experimentation tools overwhelm users with numbers. Charts, percentages, confidence intervals, and filters can quickly become confusing, especially for teams that do not run tests every day.
In 2025, we improved how analytics work inside Optibase to make results clearer and easier to interpret.
Key metrics are now easier to spot. Conversion data is presented in a way that highlights meaningful differences between variants. Filtering options allow teams to focus on specific segments without digging through complex reports.
We also improved the way results are structured, so it is easier to understand what happened, why it happened, and what to do next.
Clear analytics reduce hesitation. When teams trust the data, they move faster. They implement winners with confidence and turn insights into measurable growth.
Better testing does not stop at launching experiments. It depends on understanding the outcome and acting on it.
As teams run more experiments, speed becomes important.
Manually adjusting traffic splits. Watching results every day. Deciding when to stop a test. These tasks take time and attention.
In 2025, we introduced automation features to make experimentation more efficient.
AI traffic split allows Optibase to automatically distribute visitors between variants based on performance. Instead of keeping a fixed 50/50 split, traffic gradually shifts toward the better-performing version. This helps teams reach clearer results faster, without constant monitoring.
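The idea of gradually shifting traffic toward the stronger variant is commonly implemented as a multi-armed bandit. As an illustration only (the variant names and counts below are made up, and this is not Optibase's actual algorithm), here is a minimal Thompson sampling sketch: each new visitor is routed to whichever variant wins a random draw from its posterior conversion rate.

```python
import random

def pick_variant(stats):
    """Choose a variant for the next visitor via Thompson sampling.

    stats maps variant name -> (conversions, visitors)."""
    best, best_draw = None, -1.0
    for name, (conversions, visitors) in stats.items():
        # Draw from a Beta(successes + 1, failures + 1) posterior
        # over the variant's true conversion rate.
        draw = random.betavariate(conversions + 1, visitors - conversions + 1)
        if draw > best_draw:
            best, best_draw = name, draw
    return best

# Illustrative numbers: B converts noticeably better than A,
# so most new visitors are routed to B, while A still gets
# occasional traffic in case the gap was noise.
stats = {"A": (30, 1000), "B": (55, 1000)}
```

The appeal of this approach is that exploration never fully stops: the weaker variant keeps receiving a trickle of traffic, so an early unlucky streak cannot permanently lock it out.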
We also launched auto-stop for tests. When a clear winner emerges based on statistical confidence, the test can pause automatically. This prevents wasted traffic and reduces the risk of running experiments longer than needed.
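A stopping rule like this is typically built on a significance test between the two conversion rates. The sketch below uses a standard two-proportion z-test with a 95% confidence threshold; the threshold and function are illustrative assumptions, not Optibase's actual stopping logic.

```python
import math

def should_stop(conv_a, n_a, conv_b, n_b, z_threshold=1.96):
    """Return True when the gap between variants is statistically clear.

    Uses a pooled two-proportion z-test; 1.96 corresponds to
    roughly 95% confidence for a two-sided comparison."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return False  # no conversions (or all conversions) yet
    z = (p_b - p_a) / se
    return abs(z) >= z_threshold

should_stop(30, 1000, 55, 1000)  # clear difference -> True
should_stop(30, 1000, 32, 1000)  # within noise     -> False
```

One caveat worth knowing: checking significance repeatedly while a test runs inflates false positives, which is why production auto-stop rules usually add corrections such as minimum sample sizes or sequential testing adjustments.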
Automation does not replace decision-making. It supports it. Teams still control their experiments, but routine adjustments and monitoring no longer require manual effort.
By reducing operational overhead, testing becomes easier to scale across more pages and more teams.
One of the biggest barriers to experimentation is not analysis. It is setup.
Even experienced teams can miss a step. A goal is not connected. A variant is not configured correctly. Traffic is split in a way that does not match the hypothesis.
Small mistakes lead to unreliable results.
In 2025, we introduced the Test Wizard to guide teams through the entire setup process step by step.
Instead of jumping between screens, users move through a structured flow. First, define the variants. Then, set goals. Then, configure traffic and targeting. Each step builds on the previous one.
This reduces errors and lowers the learning curve for new users. It also speeds up experienced teams who want to launch tests quickly without second-guessing their configuration.
A/B testing should feel structured, not fragile. The Test Wizard brings consistency to how experiments are created, which improves the quality of the results.
A/B testing compares versions of a page. Every visitor sees either variant A or variant B.
Personalization works differently.
Instead of showing the same experience to everyone, you adapt the page based on who the visitor is.
In 2025, we expanded personalization inside Optibase with more advanced targeting rules. Teams can now tailor content based on a range of visitor attributes.
These rules can be combined in any way. This makes it possible to create highly targeted experiences without duplicating pages or guessing.
We also introduced reusable rules. Instead of setting up the same audience conditions again and again, you can now save targeting rules in a shared library and apply them across personalizations and tests in just a few clicks.
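Conceptually, a reusable rule library is a set of named predicates that can be combined with AND/OR logic. The sketch below is a hypothetical illustration of that idea; the rule names, visitor fields, and `matches` helper are invented for the example and do not reflect Optibase's schema.

```python
# A shared library of saved targeting rules: each rule is a
# predicate over a visitor profile (illustrative fields only).
RULES = {
    "mobile": lambda v: v.get("device") == "mobile",
    "from_ads": lambda v: v.get("utm_source") == "ads",
    "returning": lambda v: v.get("visits", 0) > 1,
}

def matches(visitor, rule_names, mode="all"):
    """Apply saved rules to a visitor, combined with AND ("all") or OR ("any")."""
    checks = [RULES[name](visitor) for name in rule_names]
    return all(checks) if mode == "all" else any(checks)

visitor = {"device": "mobile", "utm_source": "ads", "visits": 3}
matches(visitor, ["mobile", "from_ads"])            # True: both rules hold
matches(visitor, ["returning", "from_ads"], "any")  # True: at least one holds
```

Because each rule is defined once and referenced by name, fixing or tightening a rule updates every personalization and test that uses it, which is the main payoff of a shared library.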
Personalization does not replace A/B testing. It builds on it.
You can still run experiments. But now, you can also decide which experience should appear for different audiences. This makes optimization more precise and more aligned with real user behavior.
With personalization, Optibase moves beyond comparing versions and toward shaping experiences in real time.
Before you change a page, you need to understand how people use it.
Numbers tell you what happened. Behavior tools help you see why.
In 2025, we introduced heatmaps and session recordings inside Optibase.
Heatmaps show where users click, how far they scroll, and which parts of a page get attention. Instead of guessing whether a section is being ignored, you can see it clearly. If users never scroll to your pricing table, that is not a testing problem. It is a visibility problem.
Session recordings take this one step further. You can watch real user sessions to identify friction points. Where do users hesitate? Where do they stop? Where do they drop off?
These insights help teams form stronger hypotheses before launching an A/B test. Instead of testing random ideas, you can test changes based on observed behavior.
By combining behavioral data with experimentation, optimization becomes more structured. You understand the problem first. Then you design a test to solve it.
This closes the loop between insight and action.
Not every update is a headline feature.
Throughout 2025, we also shipped dozens of smaller improvements that make day-to-day experimentation smoother and more reliable.
We introduced activity logs so teams can clearly see what changes were made and when. This adds transparency, especially in collaborative environments.
Analytics were refined further, with improvements to filtering, reporting structure, and result clarity. Small adjustments here reduce confusion and make decision-making faster.
We strengthened bot detection to prevent skewed results from automated traffic. Cleaner data leads to more trustworthy outcomes.
IP blocking was added to help teams exclude internal traffic and keep experiments accurate.
Scheduled tests now allow experiments to start and stop at predefined times, making it easier to align testing with campaigns, launches, or seasonal events.
Individually, these updates may seem minor. Together, they improve stability, data integrity, and workflow consistency.
A strong experimentation platform is not only built on big features. It depends on reliable details that teams can trust every day.
When we look back at 2025, the biggest change is not a single feature.
It is the way Optibase evolved.
What started as a focused A/B testing tool is now a broader experimentation and CRO platform. Teams can understand user behavior, personalize experiences, automate decisions, collaborate more effectively, and analyze results with greater clarity — all in one place.
Each update this year was shaped by real workflows. Conversations with users. Feedback from teams running experiments every week. Problems we saw repeated across different industries.
Experimentation is not about running more tests. It is about making better decisions.
The features shipped in 2025 move Optibase closer to that goal. They reduce friction, improve insight, and make optimization part of how teams build, not just something they do occasionally.
And this is only the foundation.