Generative AI Tester

Remote
Full Time
Experienced
- 7+ years of hands-on experience in testing generative AI models, including text, image, and other content-generation outputs.
- Expertise in creating and executing test strategies, test plans, and test cases for generative AI models (language models, image generators, and other AI applications) that address the unique challenges of generative AI systems.
- Strong understanding of AI/ML concepts, including model training, validation, deployment, and continuous monitoring.
- Proficiency in testing large language models (LLMs) such as GPT and BERT, focusing on output accuracy, context retention, and consistency.
- Expert-level knowledge of natural language processing (NLP) techniques and the ability to test NLP-driven applications.
- Experience with AI/ML testing frameworks and tools such as TensorFlow, PyTorch, Hugging Face, or custom AI testing frameworks.
- Strong familiarity with data validation and testing, ensuring that datasets used for training and testing are accurate, relevant, and unbiased.
- Ability to test for potential biases within AI models by analysing model output across different demographics and data segments.
- Proficiency in defining KPIs and metrics for generative AI model testing.
- Experience conducting model evaluations using relevant performance metrics, such as BLEU, ROUGE, and perplexity for language models.
- Ability to validate model output for accuracy, coherence, and relevance, ensuring that models align with business and user expectations.
- Expertise in query optimization and data processing to validate AI model performance and output efficiency.
- Ability to perform functional, load, and stress tests on models to validate their accuracy, scalability, and responsiveness under varying conditions.
- Understanding of cloud-based AI/ML deployment and experience testing AI models deployed on cloud platforms such as AWS, Azure, or GCP.
- Proficiency in API testing for AI/ML applications, ensuring seamless integration and accurate data flow between components.
- Experience using test automation tools for AI/ML testing, including custom automation scripts tailored to AI models.
- Proficiency in programming languages such as Python or Java for developing and executing automated test scripts for AI models.
- Knowledge of Continuous Integration/Continuous Deployment (CI/CD) pipelines in the context of AI/ML model deployment and testing.
- Ability to perform continuous testing for model performance and drift post-deployment, identifying areas where model retraining may be required.
- Strong stakeholder management and communication skills for collaborating with cross-functional teams (data scientists, developers, ML engineers, the Model Ops team, and product managers) and providing feedback on model performance and potential improvements.
- Familiarity with Model Ops tools and practices for production-level AI testing.
- Strong analytical, problem-solving, and reporting skills, with the ability to identify and resolve issues related to AI model performance and integration.
- Excellent communication skills, with the ability to present test results and model evaluations to technical and non-technical stakeholders.
- Experience with test management tools such as JIRA, TestRail, or similar for end-to-end test management.
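The evaluation metrics named above (BLEU, ROUGE, perplexity) are in practice computed with libraries such as nltk, sacrebleu, or rouge_score. As a minimal, stdlib-only sketch of the idea behind such checks, the snippet below computes a BLEU-1-style clipped unigram precision and uses it to compare a model output against a reference answer. The function name and example strings are illustrative, not part of any specific toolchain used in this role.

```python
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    """BLEU-1-style clipped unigram precision of candidate vs. reference.

    Each candidate token is credited at most as many times as it
    appears in the reference ("clipping"), then the total credit is
    divided by the candidate length.
    """
    cand_tokens = candidate.lower().split()
    if not cand_tokens:
        return 0.0
    ref_counts = Counter(reference.lower().split())
    clipped = sum(
        min(count, ref_counts[token])
        for token, count in Counter(cand_tokens).items()
    )
    return clipped / len(cand_tokens)

# Illustrative check: flag generated answers that drift from a reference.
reference = "the model returns a summary of the document"
candidate = "the model returns a short summary of the document"
score = unigram_precision(candidate, reference)  # 8/9, about 0.889
assert 0.0 <= score <= 1.0
```

A real test suite would use a maintained metric implementation (and multiple references per prompt); this sketch only shows the shape of a pass/fail threshold check on generated text.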

 