Job Description
At Early Warning, we've powered and protected the U.S. financial system for over thirty years with cutting-edge solutions like Zelle, Paze, and more. As a trusted name in payments, we partner with thousands of institutions to increase access to financial services and protect transactions for hundreds of millions of consumers and small businesses.

Positions located in Scottsdale, San Francisco, Chicago, or New York follow a hybrid work model to allow for a more collaborative working environment.

Candidates responding to this posting must independently possess the eligibility to work in the United States, for any employer, at the date of hire. This position is ineligible for employment visa sponsorship.

Overall Purpose

The Senior Manager, Artificial Intelligence Risk Management, will lead and manage the design, implementation, and oversight of the firm's AI Risk Management Program within the Second Line of Defense (SLOD). This role is responsible for providing management, independent review, challenge, and advisory support to ensure the organization's development and use of artificial intelligence, including generative AI, is safe, responsible, compliant, and aligned with the firm's risk appetite, ethical principles, and regulatory expectations.

Reporting to the Senior Director of Enterprise Risk Program Governance within Independent Risk Management, the Senior Manager partners closely with first-line business managers, product, technology (including the CDO office), and data science, as well as Compliance, Legal, Privacy, Operational, Third-Party, and Technology & Security Risk, to embed AI risk requirements across the enterprise. The role plays a key part in enabling innovation while ensuring AI-related risks are appropriately identified, assessed, monitored, and governed.
Essential Functions

AI Risk Governance & Program Management
- Lead the development, maintenance, and ongoing enhancement of the enterprise AI Risk Management framework, policies, standards, procedures, and control expectations, aligned with industry-recognized frameworks such as the NIST AI Risk Management Framework and ISO/IEC 42001.
- Maintain and evolve the AI risk and control taxonomy, ensuring consistency with operational risk, model risk management, data governance, privacy, and technology risk frameworks.
- Oversee the development and use of risk management technologies and tooling used to inventory AI use cases and track risks, controls, issues, and approvals.
- Lead AI governance forums, providing independent challenge and driving risk oversight and escalation.
- Participate in and support enterprise governance forums, committees, and working groups related to AI, providing independent risk perspectives and recommendations.
- Develop and deliver training on the AI Risk Management program.

Risk Measurement, Monitoring and Reporting
- Define and implement a consistent approach to measuring and monitoring AI-related risks, aligned with enterprise risk frameworks across domains (e.g., operational, model, data, and technology risk).
- Analyze trends, emerging risks, and control performance related to AI risk exposures.
- Produce enterprise-level reporting and insights on AI risk posture, trends, and program effectiveness for senior management and governance forums.

Risk Identification & Assessment
- Develop and maintain AI use case risk assessment methodologies, including inherent risk identification, control evaluation, residual risk determination, and escalation criteria.
- Execute the second-line-of-defense enterprise-level AI risk profile assessment to measure compliance with our approved risk appetite and tolerance.
- Embed AI risk considerations and requirements into other risk domain assessments (e.g., operational risk, model risk, third-party risk, data risk, privacy, and technology risk).
- Identify emerging AI risks related to bias, explainability, data quality, security, resilience, regulatory compliance, and customer impact.

Independent Review, Challenge & Quality Assurance
- Lead effective independent review and challenge of first-line AI risk assessments, control design, mitigation strategies, and risk acceptance decisions.
- Execute and/or oversee quality assurance (QA) activities to assess adherence to AI risk management policies, standards, and governance requirements.
- Identify gaps, weaknesses, or inconsistencies in AI risk practices and ensure issues are documented, escalated, and tracked through remediation.
- Partner with other second-line risk domains to deliver integrated, holistic risk oversight of AI-enabled processes and products.

Risk Reporting & Insights
- Develop and deliver insightful, enterprise-level AI risk re