# Understanding Testing Modes

## Overview

ratl.ai provides two distinct modes to help teams optimize their software testing workflows:

- **Assisted Mode** – ideal for modular testing where users keep full control.
- **Autonomous Mode** – designed for end-to-end automation with minimal manual setup.
## Assisted Mode

**Use case:** Best for QA teams and developers who want to test APIs, web apps, load, and performance as separate tasks.
### Setup Flow

1. **Mode Selection** – The user selects "Assisted Mode" on the initial screen.
2. **Workspace Setup** (if the user is the first from their organization):
   - Name the workspace
   - Define the organization URL
   - (Optional) Add a workspace description
   - Enable auto-join for users with a matching email domain
3. **Workspace Discovery**
   - Shows existing workspaces within the organization
   - Option to join one or create a new workspace
4. **Team Invitation**
   - Invite teammates by entering their email addresses
   - Option to skip this step
5. **User Role Setup**
   - Enter the user's name
   - Choose a role: QA, Developer, Product Manager, Tech Lead, CXO, Designer, DevOps, TechOps, SiteOps, or Other
6. **Test Type Selection** – The final step before reaching the dashboard. Choose from:
   - API Functional Testing
   - API Integration (E2E) Testing
   - Load & Performance Testing
   - Web Application Testing

Once a test type is selected, the user is directed to the dashboard to begin testing.
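The auto-join option from Workspace Setup above can be sketched as a simple email-domain check. This is a minimal illustration of the rule as described, not ratl.ai's actual implementation; the function and field names are hypothetical:

```python
# Hypothetical sketch of domain-based auto-join: a user whose email
# domain matches a domain configured for the workspace joins
# automatically; everyone else needs an explicit invitation.

def can_auto_join(email: str, allowed_domains: set[str]) -> bool:
    """Return True if the email's domain is on the workspace allow-list."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in allowed_domains

workspace_domains = {"example.com"}  # configured during Workspace Setup
print(can_auto_join("dev@example.com", workspace_domains))   # True
print(can_auto_join("guest@gmail.com", workspace_domains))   # False
```

The comparison is case-insensitive on the domain part, since email domains are not case-sensitive.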
## Autonomous Mode

**Use case:** Built for teams that prefer a hands-off, intelligent testing flow using context-driven automation.

### Setup Flow

1. **Mode Selection** – The user selects "Autonomous Mode" on the initial screen.
2. **Workspace Setup** – Same as in Assisted Mode (workspace name, organization URL, team invitations, joining an existing workspace, and user role setup).
3. **SDLC Context Collection** – The user selects the tools and practices that make up their software development lifecycle (SDLC). Each selected item contributes to a projected context score:
   - **Requirements / User Stories** – tools like JIRA, ClickUp, Notion
   - **Design / UX Artifacts** – tools like Figma, Miro, Whimsical
   - **API Documentation** – tools like Swagger, Postman, Stoplight, Insomnia
   - **Code Repository** – tools like GitHub, GitLab, Bitbucket
   - **Code Quality / Static Analysis** – tools like SonarQube, ESLint, Codacy
   - **AI Code Assistants** – tools like GitHub Copilot, Cursor, TabNine, CodeWhisperer
   - **CI/CD** – tools like Jenkins, GitHub Actions, CircleCI
   - **Analytics** – tools like GA, Mixpanel, Heap, Amplitude
   - **Test Environment** – tools like Docker, Kubernetes
   - **Monitoring & APM** – tools like New Relic, Datadog, Prometheus, Sentry
4. **Platform Integration** – The user connects their tool stack by selecting the platforms used across project management, testing, collaboration, and environments.
5. **Context Score Assessment** – The user receives a Projected Context Score (0–100) based on SDLC coverage and tool integrations. Score factors include:
   - Test Planning
   - Code Quality
   - API Documentation
   - CI/CD Context & Readiness
   - Test Environment Context

   Each section is scored individually to help identify gaps and optimize automation readiness.
6. **End of Setup** – After the context score is shown, setup is complete and the user is directed to the dashboard to begin testing autonomously.
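The Projected Context Score from step 5 can be sketched as a weighted coverage calculation over the score factors listed above. The factor names come from this page, but the weights and coverage inputs are assumptions made for illustration, not ratl.ai's actual formula:

```python
# Illustrative sketch of a projected context score (0-100).
# Factor names match the docs; the weights and coverage values
# are invented for this example, not ratl.ai's real scoring model.

WEIGHTS = {
    "Test Planning": 0.25,
    "Code Quality": 0.20,
    "API Documentation": 0.20,
    "CI/CD Context & Readiness": 0.20,
    "Test Environment Context": 0.15,
}

def projected_context_score(coverage: dict[str, float]) -> int:
    """coverage maps each factor to 0.0-1.0 (share of tools integrated).

    Missing factors count as 0, so the score directly exposes gaps."""
    score = sum(WEIGHTS[f] * coverage.get(f, 0.0) for f in WEIGHTS)
    return round(score * 100)

team = {
    "Test Planning": 1.0,              # e.g., JIRA connected
    "Code Quality": 0.5,               # e.g., ESLint only
    "API Documentation": 1.0,          # e.g., Swagger connected
    "CI/CD Context & Readiness": 0.0,  # nothing connected yet
    "Test Environment Context": 1.0,   # e.g., Docker
}
print(projected_context_score(team))  # → 70
```

A team with every factor fully covered would score 100; the per-factor breakdown shows where connecting another tool (here, a CI/CD platform) would raise the score most.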
## Summary of Differences

| Aspect | Assisted Mode | Autonomous Mode |
| --- | --- | --- |
| Control | Manual test selection | Fully automated testing flow |
| Integration Required | Minimal | Extensive (SDLC-wide) |
| Best For | Modular, focused testing | Holistic, continuous testing |
| Setup Depth | 4 steps | Up to 6 steps, including integrations and scoring |
| Outcome | Manual dashboard launch | Auto-generated test strategy & dashboard |
For more detail, see *Learn more about Assisted and Autonomous Modes* in the product UI.