API Functional Tests: A Comprehensive Solution for Automated Testing
The Functional Test feature in ratl.ai provides an automated and comprehensive solution for generating, executing, and debugging test cases based on your API’s specifications. By leveraging this feature, you can ensure your APIs meet design and functional requirements without the overhead of manual testing.
This feature accepts curl commands and Postman collections as input, enabling efficient test generation. Key capabilities include:
Automated Test Generation: Generate test cases based on API specifications automatically.
Multi-Format Support: Supports input from curl commands and Postman collections.
Execution Tracking: Run and monitor test cases for execution status and response body.
Results Analysis: Get detailed insights into both successful and failed tests, helping identify issues.
Debugging Support: Quickly debug failed test cases using logs and error messages.
Before using the Functional Test feature, make sure:
The API specifications are documented and accessible as curl commands, Postman collections, or OpenAPI definitions (see the example after this list).
You have access to ratl.ai and the appropriate permissions to generate and run tests.
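For reference, a minimal OpenAPI 3.0 fragment is sketched below. The endpoint, fields, and responses are illustrative placeholders, not something ratl.ai requires; your own documented specification serves the same purpose.

```yaml
# Illustrative OpenAPI 3.0 fragment; the /users endpoint and its
# fields are hypothetical examples.
openapi: 3.0.3
info:
  title: Example Users API
  version: 1.0.0
paths:
  /users:
    post:
      summary: Create a user
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [name, email]
              properties:
                name:  { type: string }
                email: { type: string, format: email }
      responses:
        "201":
          description: User created
        "400":
          description: Invalid request body
```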
1. Upload API Specification
Navigate to the Functional Test page.
Click the Add Test Suite button.
Choose one of the following options to provide your API specifications:
Upload a Postman collection file in v2.1 format.
Paste a curl command or a Postman collection in JSON format, along with a name for your collection (see the examples after these steps).
Once uploaded, click Generate.
Tip: Ensure that the API specifications are correctly formatted to avoid errors in test generation.
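For illustration, a curl command you might paste could look like the following; the URL, headers, and payload are placeholders, not values ratl.ai requires.

```bash
# Hypothetical curl command for a user-creation endpoint; replace the
# URL, headers, and body with your own API's values.
curl -X POST "https://api.example.com/v1/users" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <token>" \
  -d '{"name": "Jane Doe", "email": "jane@example.com"}'
```

Similarly, a pasted Postman collection follows the standard v2.1 schema. A minimal skeleton (again with placeholder names and URLs) looks like this:

```json
{
  "info": {
    "name": "Example Users API",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "item": [
    {
      "name": "Create user",
      "request": {
        "method": "POST",
        "header": [{ "key": "Content-Type", "value": "application/json" }],
        "url": { "raw": "https://api.example.com/v1/users" },
        "body": {
          "mode": "raw",
          "raw": "{\"name\": \"Jane Doe\", \"email\": \"jane@example.com\"}"
        }
      }
    }
  ]
}
```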
2. Generate Test Cases
ratl.ai will generate test cases based on your input.
A summary of the generated test suites will be displayed for review.
Test case generation typically takes 1-3 minutes.
Note: The complexity of the API specifications may affect the test case generation time.
3. Review and Customize Test Cases
Review the generated test cases, which include the test case description, URL endpoint, request body, headers, query parameters, and expected status code (a sketch follows below).
Modify or add test cases if necessary to cover additional scenarios or edge cases.
Example: If a specific edge case is not covered, you can add new test cases or adjust existing ones.
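To make this structure concrete, a generated test case might look like the sketch below. ratl.ai's actual schema is not documented here, so the field names and values are illustrative only; they mirror the attributes listed above.

```json
{
  "description": "Create user with a missing email field returns 400",
  "method": "POST",
  "endpoint": "https://api.example.com/v1/users",
  "headers": { "Content-Type": "application/json" },
  "queryParameters": {},
  "requestBody": { "name": "Jane Doe" },
  "expectedStatusCode": 400
}
```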
4. Execute Tests and Review Results
Initiate the test execution.
Track each test case’s execution status and responses directly in ratl.ai.
Review the results after execution to see which tests passed or failed.
Export results at both the project and suite levels for further analysis (see the sketch below).
Pro Tip: You can generate reports for stakeholders by exporting the results.
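The exact export format depends on ratl.ai; purely as an illustrative sketch, a suite-level results summary might resemble the following, with every field name and value here being a hypothetical placeholder.

```json
{
  "suite": "Example Users API",
  "executedAt": "2024-05-01T10:30:00Z",
  "total": 24,
  "passed": 21,
  "failed": 3,
  "results": [
    {
      "testCase": "Create user with a missing email field returns 400",
      "status": "failed",
      "expectedStatusCode": 400,
      "actualStatusCode": 500
    }
  ]
}
```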
5. Debug Failed Tests
For any failed tests, use the provided logs and error messages to investigate potential issues (see the example at the end of this section).
Make necessary corrections to the API or adjust the test cases accordingly.
Re-run the tests to validate that the changes have resolved the issues.
ratl.ai will generate a new report, confirming the fixes or highlighting any persistent problems.
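A common debugging technique, independent of ratl.ai, is to replay the failing request manually and inspect the raw response. For example, using the placeholder endpoint from earlier:

```bash
# Replay the failing request with -i to print the status line, headers,
# and error body; the URL and payload are placeholders.
curl -i -X POST "https://api.example.com/v1/users" \
  -H "Content-Type: application/json" \
  -d '{"name": "Jane Doe"}'
# If the API returns 500 where the test expected 400 for invalid input,
# the fix belongs in the API's validation or error handling; then re-run
# the suite in ratl.ai to confirm.
```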
After completing these steps:
Your API is thoroughly tested against its documented specifications.
All discrepancies between the API’s expected and actual behavior are identified and resolved.
Detailed reports are generated and can be exported for future reference.