

International Journal of Engineering and Computational Applications

ISSN (Print): | ISSN (Online): 3107-6580 | Impact Factor: 8.23 | Open Access

Trustworthy Automation: Explainable AI for Secure Automated Software Testing


Abstract

Automated software testing has become a core pillar of modern software engineering due to increasing demands for rapid delivery, high reliability, and scalable quality assurance. The integration of Artificial Intelligence (AI) into testing processes has further enhanced automation by enabling intelligent test case generation, predictive defect detection, adaptive prioritization, and proactive fault prevention. However, most AI-driven testing solutions operate as opaque black-box systems, limiting transparency, accountability, and trust. This lack of explainability poses significant challenges for debugging, validation, regulatory compliance, and stakeholder confidence.
Explainable Artificial Intelligence (XAI) addresses these challenges by providing human-understandable explanations for AI-driven decisions. This paper investigates the role of XAI in automated software testing, with a particular focus on CI/CD and DevSecOps environments. We analyze key opportunities, including enhanced developer trust, improved debugging and root cause analysis, smarter test optimization, and strengthened compliance support. At the same time, we identify critical challenges such as the trade-off between model performance and interpretability, the absence of standardized metrics for explanation quality, integration complexity within CI/CD pipelines, and potential security risks arising from over-disclosure.
Based on a structured review of recent academic literature and industry practices, this study presents a comprehensive perspective on how XAI can transform automated software testing into a more transparent, trustworthy, and responsible discipline. The findings suggest that explainability should be treated as a foundational design principle rather than an optional feature for future AI-driven testing frameworks.
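The abstract's central claim, that AI-driven testing decisions should come with human-understandable explanations, can be illustrated with a minimal sketch (not from the paper; feature names and weights below are purely hypothetical): an additive risk scorer for test-case prioritization whose output decomposes into per-feature contributions, the simplest form of attribution-style explanation.

```python
# Hypothetical sketch of an explainable test-prioritization scorer.
# Unlike a black-box model, every score comes with a per-feature
# contribution breakdown that a developer can inspect and debug.

# Illustrative weights (assumed, not from the paper): larger weight means
# the feature contributes more to the predicted failure risk.
WEIGHTS = {
    "recent_failures": 3.0,        # failures of this test in recent runs
    "changed_lines_covered": 0.5,  # changed lines this test covers
    "days_since_last_run": 0.2,    # staleness of the last execution
}

def explain_risk(test_case: dict) -> dict:
    """Score a test case and return the score plus its explanation."""
    contributions = {
        feature: WEIGHTS[feature] * test_case.get(feature, 0)
        for feature in WEIGHTS
    }
    return {
        "score": sum(contributions.values()),
        # The explanation is the exact additive decomposition of the score.
        "explanation": contributions,
    }

result = explain_risk({
    "recent_failures": 2,
    "changed_lines_covered": 10,
    "days_since_last_run": 5,
})
print(result["score"])        # 3.0*2 + 0.5*10 + 0.2*5 = 12.0
print(result["explanation"])  # per-feature attribution of that score
```

Because the model is additive, the explanation is faithful by construction; this is the interpretability end of the performance-versus-interpretability trade-off the abstract discusses, whereas attribution tools such as SHAP approximate the same kind of decomposition for complex models.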

How to Cite This Article

Chandra Shekhar Pareek (2026). Trustworthy Automation: Explainable AI for Secure Automated Software Testing. International Journal of Engineering and Computational Applications (IJECA), 2(1), 01-09. DOI: https://doi.org/10.54660/.IJECA.2026.2.1.01-09
