SDET & Automation Engineer · ISTQB Certified · 14+ Years
I build automation frameworks, own QA strategy, and integrate testing into CI/CD pipelines. Over the last year I've been applying LLMs to QA — autonomous crawling, self-healing agents, and LLM evaluation frameworks.
Experience
Sept 2016 – Mar 2026
9.5 years
Career Technology · SaaS
BOLD Technologies Ltd
Principal QA Engineer / QA Lead — Automation
Sept 2011 – Sept 2016
5 years
IT Services · Multi-domain
Optimus Information Inc
Senior Software Test Engineer
Skills
Automation
Languages
CI/CD & Tools
AI in QA
Process
Certifications
Projects
Personal AI exploration built alongside 14+ years of professional QA — applying LLMs where they genuinely reduce testing overhead.
Systematic, multi-judge evaluation of LLM outputs with hallucination detection & regression tracking
Replaces subjective "does this look right?" prompt testing with structured, repeatable LLM evaluation. YAML test suites run against Groq-hosted models, scored by parallel LLM judges with hallucination detection and regression tracking — visualised in a live Streamlit dashboard you can open directly in the browser.
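The multi-judge idea can be sketched as follows — parallel judges each return a score plus a hallucination flag, and an answer passes only if the aggregate score clears a threshold and no judge flags it. The judge functions, `Verdict` type, and threshold below are illustrative stand-ins (crude token-overlap heuristics in place of real LLM judge calls), not the project's actual implementation:

```python
# Hypothetical sketch of multi-judge LLM output scoring.
# Real judges would be LLM calls; here they are toy heuristics.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Verdict:
    score: float          # 0.0-1.0 quality score from one judge
    hallucinated: bool    # judge flagged unsupported claims

def judge_relevance(answer: str, reference: str) -> Verdict:
    # Stand-in for an LLM relevance judge: token overlap with the reference.
    ref_tokens = set(reference.lower().split())
    ans_tokens = set(answer.lower().split())
    overlap = len(ref_tokens & ans_tokens) / max(len(ref_tokens), 1)
    return Verdict(score=overlap, hallucinated=False)

def judge_groundedness(answer: str, reference: str) -> Verdict:
    # Flags a "hallucination" if the answer introduces tokens absent from
    # the reference -- a toy proxy for unsupported claims.
    extra = set(answer.lower().split()) - set(reference.lower().split())
    return Verdict(score=1.0 if not extra else 0.5, hallucinated=bool(extra))

def evaluate(answer: str, reference: str, threshold: float = 0.6) -> bool:
    judges = [judge_relevance, judge_groundedness]
    with ThreadPoolExecutor() as pool:
        verdicts = list(pool.map(lambda j: j(answer, reference), judges))
    mean_score = sum(v.score for v in verdicts) / len(verdicts)
    return mean_score >= threshold and not any(v.hallucinated for v in verdicts)

print(evaluate("login succeeds with valid credentials",
               "login succeeds with valid credentials"))  # True
```

Running judges in parallel keeps wall-clock time close to the slowest single judge, which matters once each judge is a real model call.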
Requirements → approved test plans → AI execution → Jira triage — with a human in the loop
Takes unstructured requirements and converts them into executed, reported, and triaged test cases end to end. A RequirementAnalyst agent generates JSON test plans; a Streamlit portal lets humans approve before any test runs. Failures trigger AI bug triage and auto-open Jira tickets with screenshots.
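The approval gate described above can be sketched as a simple state machine: a generated plan starts pending and cannot execute until a human approves it. The names here (`TestPlan`, `approve`, `execute`, `Status`) are hypothetical; the real project gates execution through a Streamlit portal:

```python
# Toy sketch of a human-in-the-loop approval gate for AI-generated test plans.
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    EXECUTED = "executed"

@dataclass
class TestPlan:
    cases: list
    status: Status = Status.PENDING

    def approve(self) -> None:
        # A human reviews the JSON plan and signs off.
        self.status = Status.APPROVED

    def execute(self) -> list:
        # Execution is refused unless a human has approved the plan.
        if self.status is not Status.APPROVED:
            raise PermissionError("human approval required before execution")
        self.status = Status.EXECUTED
        return [f"ran: {c}" for c in self.cases]

plan = TestPlan(cases=["login with valid creds", "login with bad password"])
try:
    plan.execute()            # blocked: still pending
except PermissionError as e:
    print(e)
plan.approve()
print(plan.execute())
```

The point of the gate is that the AI never moves from "plan" to "run" on its own: the transition is only reachable through the human approval step.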
Ask questions across your QA docs — test cases, bug reports, SRS — and get grounded answers with source attribution
A production-grade Retrieval-Augmented Generation system built from scratch to understand every layer of RAG. Documents are chunked, embedded locally (no GPU needed), and stored in Qdrant Cloud. At query time the user's question is embedded, matched against stored vectors, and the top chunks are passed to Groq for a grounded answer. No black boxes — every component is explicit and replaceable.
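The pipeline above — chunk, embed, store, then retrieve top matches at query time — can be sketched end to end. The bag-of-words "embedding", cosine similarity, and in-memory store below are deliberate stand-ins for the real local embedder and Qdrant Cloud, and the final Groq call is omitted; only the retrieval shape is shown:

```python
# Minimal RAG retrieval sketch: chunk -> embed -> store -> top-k retrieve.
# Bag-of-words vectors and a list stand in for real embeddings and Qdrant.
import math
from collections import Counter

def chunk(text: str, size: int = 8) -> list:
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    # Toy embedding: token counts instead of a dense vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

store = []  # (vector, chunk) pairs -- stand-in for a Qdrant collection
doc = ("Bug 142: login button unresponsive on mobile Safari. "
      "Test case 7 covers password reset email delivery. "
      "SRS section 3 defines session timeout as 30 minutes.")
for c in chunk(doc):
    store.append((embed(c), c))

def retrieve(query: str, k: int = 1) -> list:
    # Embed the question, rank stored chunks by similarity, keep the top k.
    q = embed(query)
    ranked = sorted(store, key=lambda pair: cosine(q, pair[0]), reverse=True)
    return [c for _, c in ranked[:k]]

print(retrieve("what is the session timeout?"))
```

In the full system the retrieved chunks would be passed to the LLM as context, with the chunk origins kept alongside for source attribution.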
Zero-script bug detection & test case generation via LLM-guided browser crawl
An AI testing agent that crawls any web application, detects bugs, and auto-generates test cases without a single manually written selector. The LLM decides what to explore; Playwright executes it. Supports Groq (cloud) and Ollama (fully local).
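The division of labour — the LLM decides what to explore, the browser executes it — can be sketched as a loop. The `decide` policy below (visit the first unseen link) and the page graph are toy stand-ins for the model and for Playwright; no real browser or LLM is involved:

```python
# Sketch of an LLM-guided crawl loop: "decide" plays the LLM's role
# (choose the next action), the loop body plays Playwright's (perform it).

def decide(page, links, visited):
    # LLM stand-in: explore the first link we have not seen yet.
    for link in links:
        if link not in visited:
            return link
    return None  # agent chooses to stop on this page

def crawl(site, start, max_steps=10):
    visited, trail = set(), []
    page = start
    for _ in range(max_steps):
        visited.add(page)
        trail.append(page)
        nxt = decide(page, site.get(page, []), visited)
        if nxt is None:
            break
        page = nxt  # "Playwright clicks the chosen link"
    return trail

# Hypothetical site map: each page lists the links it exposes.
site = {"/": ["/login", "/about"], "/login": ["/reset"]}
print(crawl(site, "/"))  # -> ['/', '/login', '/reset']
```

Swapping `decide` for a model call is what removes hand-written selectors: the policy reads the page and names the next action, rather than following a scripted path.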
Get in Touch