Web Test: An Agentic Web Automation Testing Solution
Web Test in Agentic Mode lets you automate web journeys using plain English. You describe what you want to validate, and the agent opens the site, explores it visually, converts your intent into a step-by-step plan (actions + assertions), and then executes the flow while streaming live screenshots. It finishes with a detailed report containing outcomes, evidence, and logs—so the result is both clear and traceable.

Key Benefits
Review and edit before execution: The agent proposes steps and assertions first. You can add, remove, or refine them before running the test.
Live visual trace: Watch the execution as it happens, with step-by-step screenshots and a timeline that supports click-to-jump navigation.
Evidence-backed assertions: Each validation is tied to screenshots and logs, so pass/fail outcomes are explicit and explainable.
Rerun with changes: Iterate quickly by adjusting steps or assertions and rerunning; there is no need to rebuild the entire test.
DeepProbe insights (optional): Include accessibility checks and web performance metrics directly in the report.
Export and share: Download a PDF report and share results easily.
CI/CD integration: Trigger web tests from your pipeline, poll for completion, and attach reports as build artifacts to gate releases (see the sketch below).
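A minimal sketch of this pipeline flow in Python, assuming a token-authenticated REST API; every endpoint path, payload field, and status value below is a hypothetical placeholder, not a documented ratl.ai route:

```python
import time

import requests

# NOTE: hypothetical sketch. The base URL, endpoint paths, payload
# fields, and status values are illustrative placeholders, not a
# documented ratl.ai API.
BASE_URL = "https://app.ratl.ai/api/web-tests"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # from a CI secret

# Trigger a run from the pipeline with a natural-language scenario.
resp = requests.post(
    BASE_URL,
    headers=HEADERS,
    json={
        "url": "https://example.com",
        "prompt": "Add the first product to the cart and verify the cart badge shows 1",
    },
    timeout=30,
)
resp.raise_for_status()
test_id = resp.json()["id"]  # hypothetical response field

# Poll until the run reaches a terminal status.
while True:
    run = requests.get(f"{BASE_URL}/{test_id}", headers=HEADERS, timeout=30).json()
    if run["status"] in ("passed", "failed"):  # hypothetical status values
        break
    time.sleep(10)

# Attach the PDF report as a build artifact.
report = requests.get(f"{BASE_URL}/{test_id}/report.pdf", headers=HEADERS, timeout=60)
with open("web-test-report.pdf", "wb") as f:
    f.write(report.content)

# Gate the release: a failed run fails the build.
assert run["status"] == "passed", "Web Test failed; see web-test-report.pdf"
```

In a real pipeline, the token would come from a secret store, and the final assertion (or a non-zero exit code) would gate the release stage.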
Preconditions
Before running a Web Test, make sure:
The target website is reachable and behaving normally.
Your scenario is clear and unambiguous.
Any required test data is ready (CSV, if you’re running bulk tests).
How to Run a Web Test
Step 1: Start a new test with a scenario
Go to the Web Test section and provide:
A clear natural-language prompt (what to do + what to validate)
The target URL
Optional test data (CSV upload for bulk runs)
You can also pick from suggested templates, such as:
Add-to-cart flow on Swadesh
Video playback test on MX Player
Flight discovery on Cleartrip
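For example, a flight-discovery scenario (the last template above) might combine prompt and URL like this; the wording is illustrative, not a shipped template:

```
URL:    https://www.cleartrip.com
Prompt: Search for a one-way flight from Delhi to Mumbai departing next
        Friday. Verify that the results page lists at least one flight
        and that every result shows a price.
```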
Step 2: Review the plan (optional but recommended)
If you choose the detailed planning option, the agent will generate a proposed plan with steps and assertions. You can then:
Accept: proceed with the proposed plan
Modify: adjust steps or assertions as needed, then submit
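For illustration, a proposed plan for the add-to-cart template might look like the following; the step wording and assertions are hypothetical, not captured product output:

```
Step 1: Navigate to the storefront home page
Step 2: Click the first product card
  Assert: the product page shows a title, a price, and an "Add to Cart" button
Step 3: Click "Add to Cart"
  Assert: the cart badge count increases to 1
Step 4: Open the cart
  Assert: the selected product is listed with the correct price
```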
Once the plan is submitted—or if you skip planning—you’ll land on the execution screen.

Step 3: Visualise the execution
During execution, the agent:
Launches the URL in a real browser
Follows the approved step plan in order
Runs assertions after each step and marks pass/fail
Captures step-level screenshots as evidence
Streams updates to the live timeline until completion
Step 4: Analyse the Report
What the report includes
Stats: total assertions, passed, failed.
Summary: AI-generated overview of the run outcome and intent.
Overview: URL, who triggered the run, timestamps, duration, browser, and viewport.
Step Sequence: The exact step plan (actions + assertions).
Assertions table: expected vs actual, pass/fail, linked screenshots.
Screenshots gallery: full visual trace of the run.
Test logs: step-level logs and outcomes.
Export assets: video recording and PDF report.
DeepProbe (if enabled): accessibility + web performance results.
Network + console logs (if DeepProbe enabled): captured during execution.

Advanced Features
Bulk Test Execution
Upload a CSV with multiple data rows along with your prompt. The system generates appropriate test cases, runs them in parallel, and provides individual reports for each run.
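As an illustration, a bulk-run CSV might carry one test case per row, with column names matching placeholders in your prompt; the columns below are hypothetical:

```
origin,destination,travel_date,expected_result
DEL,BOM,2025-01-15,at least one flight listed
BLR,GOI,2025-02-01,at least one flight listed
```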

Test Management
View all test executions in a centralised dashboard
Filter and search tests by name, status, or date
Re-run previous tests with a single click
Access detailed execution history

Recording and Playback
Watch video recordings of test executions
Review step-by-step screenshots
Analyse execution flow for debugging

Postconditions
After test execution:
A full report is generated
A video recording is available
Step-by-step screenshots are captured as evidence
Final status (success/failure) is clearly shown
Key execution metrics (assertion counts, pass/fail, duration) are summarized
Best Practices
Writing strong instructions
Be specific about actions (click, type, select, navigate)
Include expected outcomes in the prompt
Break complex flows into clear, logical steps (see the example below)
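For instance, contrast a vague instruction with a specific one; the site details here are hypothetical:

```
Vague:    Test the search feature.
Specific: Type "wireless mouse" into the search box and press Enter.
          Verify the results page lists at least one product whose
          title contains "mouse" and shows a visible price.
```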
Managing test data
Use CSV for bulk execution
Keep formats consistent
Include varied data to improve coverage
Monitoring and improving runs
Review reports after each execution
Use failed steps to refine prompts and assertions
Track success rates over time to measure stability
Conclusion
Web Test in ratl.ai turns natural-language intent into reliable, vision-driven web automation. It removes scripting by generating executable steps and assertions, provides live visual evidence during runs, and closes with a comprehensive report—making web testing faster to create, easier to debug, and simple to share.