
Senior Associate, National Security-Cyber Security Governance

Alvarez & Marsal Deutschland GmbH
Part-time · Remote · Seattle, WA, United States · Posted: 2026-05-11 · Until: 2026-07-10
Job Description
Alvarez & Marsal (A&M) is a global consulting firm with over 10,000 entrepreneurial, action- and results-oriented professionals in over 40 countries. We take a hands-on approach to solving our clients' problems and assisting them in reaching their potential. Our culture celebrates independent thinkers and doers who positively impact our clients and shape our industry. The collaborative environment and engaging work, guided by A&M's core values of Integrity, Quality, Objectivity, Fun, Personal Reward, and Inclusive Diversity, are why our people love working at A&M.

The Team
At A&M you will have the opportunity to work with a diverse team of supportive and motivated professionals who love to share their knowledge and depth of industry experience with others. A&M's Disputes and Investigations practice comprises professionals from a wide range of backgrounds who bring and share their deep expertise in conducting investigations and delivering expert witness reports. We have an inclusive, developmental environment where everyone has the opportunity to learn and grow. Our culture is characterized by openness and entrepreneurial thinking, with a foundation of mutual respect and high-quality standards for our work. We strive to remove bureaucracy in favor of recognizing effort and results through advancement opportunities and a motivating performance-based reward structure.

How you will contribute
With the rapid adoption of AI technologies and an evolving regulatory landscape, demand for AI-focused security analysis and compliance expertise is growing rapidly. Our team supports organizations, investors, and counsel in identifying, assessing, and mitigating risks associated with AI system deployment, algorithmic bias, data privacy, and model security. We focus on implementing secure AI/ML pipelines, establishing AI governance frameworks, conducting model risk assessments, and ensuring compliance with emerging AI regulations. Our approach integrates traditional cybersecurity with AI-specific security controls, leveraging automated testing, model monitoring, and adversarial robustness techniques. The team serves as a trusted advisor to organizations navigating AI regulatory requirements, security certifications, and responsible AI implementation.

Responsibilities
- Lead technical teams in executing AI security assessments, model audits, and compliance reviews related to the EU AI Act, the NIST AI Risk Management Framework, ISO/IEC 23053/23894, and emerging AI governance standards. Develop AI risk assessment methodologies and implement continuous monitoring solutions for production ML systems.
- Design and implement secure AI/ML architectures incorporating MLOps security practices, including model versioning, data lineage tracking, feature store security, and secure model deployment pipelines.
- Integrate security controls for Large Language Models (LLMs), including prompt injection prevention, output filtering, and embedding security.
- Conduct technical assessments of AI/ML systems using tools such as:
  - AI Security Tools: Adversarial Robustness Toolbox (ART), Foolbox, CleverHans for adversarial testing (see the illustrative sketch after this list)
  - MLOps Platforms: MLflow, Kubeflow, Amazon SageMaker, Azure ML, Google Vertex AI
  - Model Monitoring: Evidently AI, Fiddler AI, WhyLabs, Neptune.ai for drift detection and explainability
  - LLM Security: Guardrails AI, NeMo Guardrails, LangChain security modules, OWASP LLM Top 10 tools
  - Privacy-Preserving ML: PySyft, TensorFlow Privacy, Opacus for differential privacy implementation
- Implement AI compliance and governance solutions addressing:
  - Regulatory Frameworks: EU AI Act, Canada's AIDA, US AI Executive Orders, Singapore's Model AI Governance Framework
  - Industry Standards: ISO/IEC 23053, ISO/IEC 23894, IEEE 7000 series, NIST AI RMF
  - Sector-Specific Requirements: FDA AI/ML medical device regulations, GDPR Article 22 (automated decision-making), SR 11-7 model risk management
- Develop and execute penetration testing specifically for AI systems, including:
  - Model extraction attacks and defenses
  - Data poisoning vulnerability assessments
  - Membership inference and model inversion testing
  - Prompt injection and jailbreaking assessments for LLMs
  - Backdoor detection in neural networks
- Program and deploy custom security solutions using:
  - Languages: Python (PyTorch, TensorFlow, scikit-learn), R, Julia
  - AI Frameworks: Hugging Face Transformers, LangChain, LlamaIndex, AutoML tools
  - Security Libraries: SHAP, LIME for explainability; Fairlearn, AIF360 for bias detection (see the explainability sketch below)
  - Infrastructure: Docker, Kubernetes, Terraform for secure AI deployment
- Integrate AI security with traditional security frameworks including Zero Trust architecture, IAM so
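As a rough illustration of the adversarial testing mentioned above, the following Python sketch wraps a placeholder scikit-learn classifier with the Adversarial Robustness Toolbox (ART), generates FGSM adversarial examples, and compares clean versus adversarial accuracy. The dataset, model, and epsilon value are illustrative assumptions, not part of the role description.

    # Illustrative sketch only: a minimal adversarial robustness check with the
    # Adversarial Robustness Toolbox (ART). Dataset, model, and eps are placeholders.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from art.estimators.classification import SklearnClassifier
    from art.attacks.evasion import FastGradientMethod

    # Train a simple baseline classifier on placeholder data scaled to [0, 1].
    X, y = load_digits(return_X_y=True)
    X = X / 16.0
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Wrap the model for ART and craft adversarial examples with FGSM.
    classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    X_adv = attack.generate(x=X_test)

    # Compare clean vs. adversarial accuracy as a rough robustness signal.
    print(f"clean accuracy:       {model.score(X_test, y_test):.3f}")
    print(f"adversarial accuracy: {model.score(X_adv, y_test):.3f}")

In practice, a check like this would typically be run against the client's own models and wired into their CI pipeline so robustness regressions surface before deployment.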
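Similarly, a minimal sketch of the explainability tooling named above (SHAP), assuming a placeholder tree-ensemble model; the dataset and model choice are illustrative only.

    # Illustrative sketch only: global feature attribution with SHAP for a
    # placeholder tree-ensemble classifier.
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values efficiently for tree-based models.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Rank features by mean absolute attribution across the dataset.
    shap.summary_plot(shap_values, X)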