AI Safety Expert Pioneers LLM Security Through Red Team Assessments
Matteo Dora is a machine learning researcher leading the LLM safety team at Giskard, where he develops and implements security measures for AI systems. As an instructor for DeepLearning.AI's "Red Teaming LLM Applications" course, he teaches developers how to identify and mitigate vulnerabilities in large language models. His expertise spans the intersection of ethics, safety, and security in AI applications, with a particular focus on conducting red team assessments of generative AI systems. Before his current role, Dora worked as an academic researcher in neuroscience and applied mathematics, bringing an interdisciplinary perspective to AI safety. His work at Giskard has contributed to the development of open-source testing frameworks for machine learning models, helping organizations identify critical vulnerabilities such as prompt injection, sensitive information disclosure, and hallucination.