SDET & Automation Engineer  ·  ISTQB Certified  ·  14+ Years

Test automation at scale,
now with AI.

I build automation frameworks, own QA strategy, and integrate testing into CI/CD pipelines. Over the last year I've been applying LLMs to QA — autonomous crawling, self-healing agents, and LLM evaluation frameworks.

ISTQB Certified · Playwright · Selenium · Python · Java · Jenkins · GitHub Actions · LangChain · Groq · API Testing · Agile · Shift-Left
LinkedIn ↗ GitHub ↗ Email me

Experience

Sept 2016 – Mar 2026

9.5 years

Career Technology · SaaS

BOLD Technologies Ltd

Principal QA Engineer / QA Lead — Automation

  • Led QA automation for a mid-size Agile team across multiple squads — owned regression scope, automation candidacy decisions, and sprint quality metrics per release cycle.
  • Built and maintained Selenium (Java) and Playwright (Python) automation frameworks from scratch; integrated into Jenkins and GitHub Actions CI/CD pipelines with full reporting.
  • Introduced shift-left testing practices — brought QA into requirements and design phases, reducing defect escape to production.
  • Automated regression coverage across resume builder, cover letter, and career tool modules — significantly reduced manual regression effort per sprint.
  • Validated REST API services using Postman; designed test plans covering functional, integration, and regression scenarios.
  • Standardised defect triage process across teams in Jira; built release health dashboards used by stakeholders for go/no-go decisions.
  • Managed TestLink test repository — maintained 500+ test cases across modules with full requirements traceability.
  • Mentored junior QA engineers; represented QA across sprint planning, retrospectives, and release demos.
Selenium · Playwright · Java · Python · Jenkins · GitHub Actions · Postman · Jira · TestLink · Agile / Scrum

Sept 2011 – Sept 2016

5 years

IT Services · Multi-domain

Optimus Information Inc

Senior Software Test Engineer

  • Functional, regression, integration, and API testing across web and mobile platforms for multiple client products.
  • REST API validation via Postman; defect triage and requirements clarification in Agile sprints.
  • Supported automation initiatives — maintained and extended scripts within existing frameworks.
Manual Testing · Postman · Jira · Agile

Skills

Automation

  • Playwright
  • Selenium WebDriver
  • pytest / JUnit
  • pytest-xdist (parallel)
  • Page Object Model

Languages

  • Python
  • Java

CI/CD & Tools

  • Jenkins
  • GitHub Actions
  • Allure Reports
  • TestLink
  • Jira
  • Postman / Rest Assured

AI in QA

  • LangChain LCEL
  • Groq · Ollama · SambaNova
  • LLM-as-Judge evaluation
  • Self-healing agents
  • Prompt engineering

Process

  • Agile / Scrum
  • Shift-left testing
  • Risk-based test planning
  • Defect triage governance
  • Sprint ceremonies

Certifications

  • ISTQB Foundation

Projects

Personal AI exploration built alongside 14+ years of professional QA — applying LLMs where they genuinely reduce testing overhead.

VerdictAI — LLM Evaluation Framework
AI Project

Systematic, multi-judge evaluation of LLM outputs with hallucination detection & regression tracking

Replaces subjective "does this look right?" prompt testing with structured, repeatable LLM evaluation. YAML test suites run against Groq-hosted models and are scored by parallel LLM judges with hallucination detection and regression tracking, all visualised in a live Streamlit dashboard; evaluations can be launched directly from the browser.

  • YAML-driven test suites — hallucination, safety, format, and RAG evaluation
  • Multi-judge consensus (Groq + Cerebras) with disagreement detection
  • Hallucination detection via claim extraction + batched NLI verification (~60% fewer tokens)
  • Relevance scoring using sentence-transformers cosine similarity
  • Regression tracking with automatic score drop detection across runs
  • Run live evaluations from the browser — no CLI required
  • Jira auto-ticket creation on consecutive failures
Python · LLM Evaluation · Groq · Cerebras · LangChain · Streamlit · SQLite · Allure
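The multi-judge consensus idea above can be sketched in a few lines. This is an illustrative stand-in, not VerdictAI's actual code: judges are stubbed as plain functions (in the real framework they call Groq- and Cerebras-hosted models), and the 0.2 disagreement threshold is an assumed value.

```python
from statistics import mean, stdev

def judge_scores(answer: str, judges) -> list[float]:
    """Collect a 0-1 quality score from each judge for the same answer."""
    return [judge(answer) for judge in judges]

def consensus(scores: list[float], disagreement_threshold: float = 0.2) -> dict:
    """Average the judges and flag runs where their scores diverge."""
    spread = stdev(scores) if len(scores) > 1 else 0.0
    return {
        "score": mean(scores),
        "disagreement": spread > disagreement_threshold,
    }

# Stub judges standing in for separate LLM-as-Judge calls.
judges = [lambda a: 0.9, lambda a: 0.85, lambda a: 0.4]
result = consensus(judge_scores("candidate answer", judges))
print(result)  # third judge disagrees, so the run is flagged
```

Flagging disagreement instead of silently averaging is what makes the consensus useful: a flagged run tells you the judges (or the prompt) need a closer look.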
Sentinel QA — Human-in-the-Loop AI Test Framework
AI Project

Requirements → approved test plans → AI execution → Jira triage — with a human in the loop

Takes unstructured requirements and converts them into executed, reported, and triaged test cases end to end. A RequirementAnalyst agent generates JSON test plans; a Streamlit portal lets humans approve before any test runs. Failures trigger AI bug triage and auto-open Jira tickets with screenshots.

  • RequirementAnalyst agent: plain-text requirements → structured JSON test plans via LangChain LCEL
  • Human-in-the-Loop Streamlit portal — approval gate before execution
  • LLM provider failover: SambaNova → Groq → Ollama (one config line)
  • Parallel execution via pytest-xdist; Allure reports with per-failure AI analysis
  • ActionRegistry replaces page-object models — Playwright driven by AI decisions
Python · Playwright · LangChain LCEL · SambaNova · Groq · FastAPI · SQLite · pytest-xdist · Jira API
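The provider failover chain can be sketched as a simple ordered loop. The provider names and order come from the project description; the function shapes and stubs are illustrative, not Sentinel QA's real client code.

```python
PROVIDER_ORDER = ["sambanova", "groq", "ollama"]  # the "one config line"

def call_with_failover(prompt: str, providers: dict) -> tuple[str, str]:
    """Try each provider in order; return (provider_name, response)."""
    last_error = None
    for name in PROVIDER_ORDER:
        try:
            return name, providers[name](prompt)
        except Exception as exc:  # timeout, rate limit, outage, etc.
            last_error = exc
    raise RuntimeError("all LLM providers failed") from last_error

# Stubs: SambaNova is "down", so the call falls through to Groq.
def sambanova_down(prompt):
    raise TimeoutError("provider unreachable")

providers = {
    "sambanova": sambanova_down,
    "groq": lambda p: f"answer to: {p}",
    "ollama": lambda p: "local answer",
}
print(call_with_failover("generate a test plan", providers))
```

Keeping the order in one list is what makes failover a one-line config change: reordering or removing a provider never touches the calling code.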
QA Knowledge Bot — RAG Pipeline
Live Demo

Ask questions across your QA docs — test cases, bug reports, SRS — and get grounded answers with source attribution

A production-grade Retrieval-Augmented Generation system built from scratch to understand every layer of RAG. Documents are chunked, embedded locally (no GPU needed), and stored in Qdrant Cloud. At query time the user's question is embedded, matched against stored vectors, and the top chunks are passed to Groq for a grounded answer. No black boxes — every component is explicit and replaceable.

  • LangChain RecursiveCharacterTextSplitter — splits on paragraphs → sentences → words for clean chunk boundaries
  • sentence-transformers (all-MiniLM-L6-v2) — free, CPU-only, 384-dim embeddings
  • Qdrant Cloud vector DB — cosine similarity search with source filename + score in every result
  • Groq (llama-3.1-8b-instant) — ~1s grounded answers; returns "Not found in QA docs" when context is absent
  • Streamlit chat UI with session history, collapsed source expander, and in-app re-indexing
Python · RAG · LangChain · sentence-transformers · Qdrant Cloud · Groq · Streamlit
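The retrieval step described above (embed the query, rank stored chunks by cosine similarity, pass the top hits to the LLM) can be sketched without any external services. Real embeddings come from all-MiniLM-L6-v2 and live in Qdrant; here a toy bag-of-words embedding keeps the sketch self-contained and runnable.

```python
import math

def embed(text: str) -> dict:
    """Toy embedding: lowercase word counts (stand-in for 384-dim vectors)."""
    vec: dict = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(query: str, chunks: list[str], k: int = 2):
    """Return the k chunks most similar to the query, with scores."""
    scored = [(cosine(embed(query), embed(c)), c) for c in chunks]
    return sorted(scored, reverse=True)[:k]

chunks = [
    "bug report: login button unresponsive on mobile",
    "test case: verify resume export to PDF",
    "SRS section: login must support SSO",
]
print(top_chunks("why does login fail", chunks))
```

Swapping the toy `embed` for a sentence-transformers model and the in-memory ranking for a Qdrant search is the whole jump from sketch to the pipeline described above; the shape of the query flow stays the same.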
AI Autonomous Testing Framework
AI Project

Zero-script bug detection & test case generation via LLM-guided browser crawl

An AI testing agent that crawls any web application, detects bugs, and auto-generates test cases without a single manually written selector. The LLM decides what to explore; Playwright executes it. Supports Groq (cloud) and Ollama (fully local).

  • Auto-login detection — Cloudflare, multi-step login, SSO, cookie banners
  • Signal-gated bug detection — LLM only fires on real errors (console, XHR, DOM)
  • Self-healing actions with up to 5 fallback strategies on selector failure
  • API testing layer captures all XHR/fetch calls during crawl and validates each endpoint
  • Generates contextual test cases per page saved to Excel; Allure reports auto-open
Python · Playwright · Groq · Ollama · llava (vision) · LLM agents · Allure
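The self-healing idea (try an ordered list of fallback selector strategies until one resolves) can be sketched as below. The click function is stubbed; in the real agent each strategy would drive a Playwright locator, and the specific selectors here are invented examples.

```python
def click_with_healing(strategies, try_click, max_attempts: int = 5):
    """Try each selector strategy in order until one succeeds."""
    errors = []
    for selector in strategies[:max_attempts]:
        try:
            try_click(selector)
            return selector  # healed: report which strategy worked
        except LookupError as exc:
            errors.append((selector, str(exc)))
    raise LookupError(f"all strategies failed: {errors}")

# Stub page: only the role-based selector resolves.
def fake_click(selector):
    if selector != "role=button[name='Submit']":
        raise LookupError(f"no element for {selector}")

strategies = [
    "#submit-btn",                 # original id (broken after a redeploy)
    "text=Submit",                 # text fallback
    "role=button[name='Submit']",  # accessible-role fallback
    "xpath=//button[1]",           # positional last resort
]
print(click_with_healing(strategies, fake_click))
```

Returning the strategy that succeeded matters for self-healing: the agent can promote it to the front of the list so the next run skips the broken selectors.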

Get in Touch

anantjain99@gmail.com · LinkedIn · github.com/anant-pw · 9868021293 · Noida, UP