A language-agnostic API & AI/ML E2E testing framework
#MLOps #E2ETesting #Playwright #Automation #API #DevOps #AIEngineering #CICD #SoftwareEngineering #TechLeadership #Testing #QualityEngineering
Over the past few weeks, I've built a language-agnostic API & AI/ML E2E testing framework that plugs seamlessly into any Python, .NET, or AI/ML project.
The idea is simple but powerful:
- Drop a JSON use-case
- Run multi-step tests automatically
- Get full observability + SLA enforcement
Just define your use cases and run. Works across APIs, ML workflows, and backends, and is extensible to the frontend.
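As an illustration of the idea, a use-case file might look something like this. The field names below are hypothetical, not the framework's actual schema:

```json
{
  "name": "predict-flow",
  "steps": [
    {
      "name": "login",
      "method": "POST",
      "url": "/api/auth/login",
      "body": { "user": "qa@example.com" },
      "expect": { "status": 200 }
    },
    {
      "name": "score",
      "method": "POST",
      "url": "/api/model/predict",
      "body": { "features": [0.2, 1.7, 3.1] },
      "expect": { "status": 200, "slaMs": 500 }
    }
  ]
}
```

Each step declares a request, an expected outcome, and optionally an SLA budget; the runner executes them in order and reports per-step results.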
One thing I learned while building this: success isn't just about passing tests; it's about making testing accessible. The goal was to enable QA, PMs, and Data Scientists to validate complex AI systems and ML pipelines without needing deep coding expertise.
Why This Matters: Problems It Solves
- Fragmented E2E testing: different frameworks per project, duplicated effort
- Slow validation: manual testing delays releases and ML iterations
- Lack of observability: hard to debug across APIs and ML workflows
- Hard to scale: every new project means a new test setup
Stack & Technologies
- Playwright (`@playwright/test`)
- TypeScript / Node.js
- JSON-driven DSL (`e2e-core`)
- Auth: Bearer · API Key · Basic · mTLS
- Validation: `ajv` (JSON Schema) · `zod`
- Observability: structured logs · traces · SLA enforcement
- CI/CD: GitHub Actions + environment-based config
- MLOps: supports ML pipelines, batch jobs, model endpoints
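To make the SLA-enforcement idea concrete, here is a minimal sketch in TypeScript of the kind of pure check a runner could apply after each HTTP call. The `Step` shape, `evaluateStep` name, and fields are illustrative assumptions, not the framework's actual API:

```typescript
// Hypothetical shape of one step in a JSON use-case.
type Step = {
  name: string;
  method: "GET" | "POST";
  url: string;
  expectStatus: number;
  slaMs?: number; // optional latency budget for this step
};

type StepResult = { name: string; passed: boolean; reason?: string };

// Compare the observed status and latency against the step's expectations.
function evaluateStep(
  step: Step,
  actualStatus: number,
  elapsedMs: number
): StepResult {
  if (actualStatus !== step.expectStatus) {
    return {
      name: step.name,
      passed: false,
      reason: `expected status ${step.expectStatus}, got ${actualStatus}`,
    };
  }
  if (step.slaMs !== undefined && elapsedMs > step.slaMs) {
    return {
      name: step.name,
      passed: false,
      reason: `SLA ${step.slaMs}ms exceeded (took ${elapsedMs}ms)`,
    };
  }
  return { name: step.name, passed: true };
}
```

Keeping the check pure (status and latency in, verdict out) is what makes SLA enforcement a first-class, observable result per step rather than an afterthought in logs.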
Why Not Just pytest?
| | pytest | This framework |
|---|---|---|
| Test authoring | Python code | JSON use-case files |
| Execution | Sequential by default | Parallel, isolated |
| Stack coverage | Python | Python, .NET, ML, APIs |
| Observability | Plugin-dependent | Built-in logs + traces + SLA |
| CI/CD | Manual setup | Plug-and-play |
- JSON-driven: no need to write test code for flows
- Parallel execution: fast, isolated runs
- Cross-stack: one framework for Python, .NET, ML
- Built-in observability: logs, traces, SLA
- CI/CD ready: plug-and-play into pipelines
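Since the framework is built on `@playwright/test`, parallel, isolated runs come largely from configuration. A minimal `playwright.config.ts` along these lines would enable them (the worker count and reporter choices here are arbitrary examples, not the project's actual config):

```typescript
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,                // run test cases in parallel, each in its own worker
  workers: 4,                         // arbitrary; Playwright defaults to half the CPU cores
  retries: process.env.CI ? 1 : 0,    // one retry in CI to absorb transient flakes
  reporter: [['list'], ['json', { outputFile: 'results.json' }]],
});
```

Because each worker runs in a separate process, test files stay isolated by default, which is what makes the parallel runs safe.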
What This Unlocks
- Faster validation of APIs + ML workflows
- Reduced manual testing effort
- One consistent framework across teams and projects
This isn't just testing. It's a plug-and-play reliability engine for modern API and AI/ML systems.