This article, based on a presentation by Andra Lezza of Sage and OWASP, provides an in-depth analysis of securing AI assistants, or copilots, with a focus on protecting sensitive data. It highlights the growing integration of AI assistants into everyday workflows and the need for data security across the entire pipeline: ingestion, transformation, model training, deployment, and monitoring. The presentation compares the OWASP Top 10 web vulnerabilities with those specific to LLMs, noting that many traditional security principles, such as least privilege and input sanitization, remain relevant, while also introducing LLM-specific risks like system prompt leakage, vector and embedding weaknesses, and misinformation.

The article then explores two copilot architectures, independent and integrated, outlining the threats and controls particular to each, such as information disclosure, supply chain attacks, and prompt injection. It emphasizes defense-in-depth strategies, continuous monitoring, and the use of tools like safetensors, guardrails, and prompt templates to mitigate risk, concluding that robust security implementation is paramount regardless of the architectural choice.
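To make the guardrail-and-template mitigation concrete, here is a minimal sketch of the idea: screen untrusted user input for obvious prompt-injection phrases, then render it inside a template that clearly delimits it from the system instructions. All names (`INJECTION_PATTERNS`, `guard_and_render`, the template text) are hypothetical illustrations, not the APIs of any specific guardrail library.

```python
import re

# Hypothetical denylist of obvious injection phrases; real guardrail
# tools use far richer detection (classifiers, canary tokens, etc.).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?system prompt", re.IGNORECASE),
]

# Hypothetical prompt template that marks user input as untrusted data,
# keeping it separate from the copilot's system instructions.
SYSTEM_TEMPLATE = (
    "You are a billing assistant. Answer only from the provided context.\n"
    "--- user input (untrusted) ---\n"
    "{user_input}\n"
    "--- end user input ---"
)

def guard_and_render(user_input: str) -> str:
    """Reject inputs matching known injection patterns, then embed the
    input in a template that labels it as untrusted."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            raise ValueError("possible prompt injection detected")
    return SYSTEM_TEMPLATE.format(user_input=user_input)
```

In a defense-in-depth setup, a check like this would be only one layer, combined with output filtering, least-privilege access for the model's tools, and continuous monitoring.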


