
Vice President, Cybersecurity Engineer - AI Security

Ares Management
Full-time · Remote (US) · New York, NY, US · USD 250,000–270,000 / month · Posted: 2026-05-11 · Apply by: 2026-07-10
Job Description
Over the last 20 years, Ares' success has been driven by our people and our culture. Today, our team is guided by our core values – Collaborative, Responsible, Entrepreneurial, Self-Aware, Trustworthy – and our purpose to be a catalyst for shared prosperity and a better future. Through our recruitment, career development, and employee-focused programming, we are committed to fostering a welcoming and inclusive work environment where high-performing talent of diverse backgrounds, experiences, and perspectives can build careers within this exciting and growing industry.

REPORTING RELATIONSHIPS
Reports to: Principal, Cybersecurity Engineering Manager
Direct Reports: None

Position Summary Statement
We are looking for a collaborative and forward-thinking Cybersecurity Engineer to help lead the design and implementation of our AI Security Program. In this role, you will work closely with teams across the company to ensure our use of AI – large language models (LLMs), ML pipelines, commercial AI platforms, and AI-enabled applications – is secure, responsible, and aligned with our organizational values. You will also contribute broadly to cloud, application, and platform security initiatives.

You'll partner with Data Security, Engineering, Architecture, Legal/Compliance, and senior business stakeholders to ensure our AI adoption is responsible, resilient, and secure by design. This is an opportunity to define foundational controls for a rapidly evolving domain. We are looking for you to bring curiosity, a strong security engineering foundation, and the ability to work with diverse stakeholders.

We value diverse backgrounds, perspectives, and experiences, and we are committed to building a team where everyone feels they belong. We especially encourage candidates from communities underrepresented in cybersecurity and technology to apply. Our interview process focuses on problem-solving ability, practical skills, and a collaborative mindset.
Detailed Responsibilities/Duties

Lead AI Security Program Development
- Develop and maintain an AI security strategy and control framework that is transparent, practical, and accessible to teams across the organization.
- Partner with Data Science, ML Engineering, Platform Engineering, Architecture, Legal, and Risk teams to build secure patterns for LLMs, model pipelines, vector databases, and AI-integrated applications.
- Implement safeguards against AI-specific risks such as prompt injection, data exposure, model exfiltration, misalignment, and unsafe output behaviors.
- Contribute to policies and guidelines that promote safe and ethical AI usage.

Security Engineering & Cross-Functional Support
- Build secure-by-default tooling, templates, middleware, and reference architectures that make secure AI development easier for teams.
- Contribute to broader security engineering initiatives across cloud, application, and platform domains.
- Integrate AI safety and security controls into CI/CD pipelines and production environments.
- Work with engineering teams to implement identity, data protection, and workload isolation controls for AI systems.

AI Risk Assessment, Threat Modeling & Governance
- Facilitate inclusive, collaborative threat modeling sessions for AI/ML systems, encouraging contributions from technical and non-technical stakeholders.
- Support governance efforts using frameworks such as the NIST AI RMF and the OWASP Top 10 for LLMs.
- Help teams understand AI risks in clear, actionable language and integrate controls into their workflows.

Telemetry, Monitoring & Response
- Implement monitoring for AI behavior, model integrity, data access, and user interaction patterns.
- Develop dashboards and key metrics that help teams understand the health, safety, and security of AI systems.
- Partner with Cyber Platform Engineering and Security Operations to build incident response playbooks for AI-related events (e.g., data contamination, prompt-based attacks, anomalous model outputs).
Cross-Team Collaboration & Enablement
- Serve as an approachable and trusted partner for engineering, data, product, and risk teams.
- Provide guidance in a supportive, inclusive manner, meeting teams where they are.
- Help build a culture of safe and responsible AI use through documentation, education, workshops, and community-of-practice efforts.
- Create paved roads and reusable assets that reduce friction for teams adopting secure AI technologies.

What Success Looks Like (12–18 Months)
- A practical, well-documented AI security framework t