Job Description
COMPANY OVERVIEW

Jenner & Block LLP is a law firm with a fearless reputation and global reach, known for high-level problem solving in litigation, government controversies, investigations, regulatory challenges, and complex corporate transactions. With offices in Century City, Chicago, London, Los Angeles, New York, San Francisco, and Washington, DC, the firm represents a wide range of clients, counseling Fortune 100 companies, large privately held corporations, major nonprofits, top universities, private equity investors, and Native American tribes. Consistently recognized as a firm that stands up for its values through its commitment to justice and community service, Jenner has been named the No. 1 pro bono firm in the United States 13 times by The American Lawyer.

POSITION SUMMARY

Jenner & Block is seeking an experienced Enterprise Data Engineer to join our Information Technology team. In this role, you will design, build, and maintain the data pipelines and infrastructure that power firm-wide reporting and analytics. Working primarily within the Microsoft Fabric ecosystem, you will serve as a key contributor to the architecture and evolution of the firm's Enterprise Data Repository, partnering closely with data engineers, IT leadership, and governance stakeholders to deliver reliable, scalable data solutions across the firm.
ESSENTIAL JOB FUNCTIONS

- Design, build, and maintain end-to-end data pipelines that ingest structured and semi-structured data from source systems including Elite 3E Cloud, Chrome River, the Intapp suite, SharePoint, and other firm applications
- Develop and manage batch and incremental data ingestion workflows using Microsoft Fabric Data Factory pipelines and related orchestration tools; implement robust error handling, retry logic, alerting, and logging frameworks across all pipeline components
- Monitor pipeline performance, identify bottlenecks, and optimize throughput, latency, and resource utilization; collaborate with application owners and vendors to define and maintain data contracts, API integrations, and file-based data feeds
- Collaborate with peer data engineers on the architecture, design, and evolution of the firm's Enterprise Data Repository ("EDR"), which serves as the authoritative data store for firm-wide reporting and analytics
- Develop and enforce data modeling standards, including dimensional modeling, star schema design, and normalized transactional schemas as appropriate to each use case; manage the EDR schema lifecycle, including versioning, migration scripts, and backward compatibility considerations
- Partner with the Database Administrator and Data Engineers to optimize query performance, indexing strategies, and resource governance within the EDR environment; maintain comprehensive data dictionaries and metadata documentation to support data literacy across the organization
- Serve as a Microsoft Fabric platform engineer, assisting with workspace configuration, capacity management, and governance settings
- Build and manage Lakehouses, Warehouses, Dataflows Gen2, and Notebooks within Microsoft Fabric to support unified analytics and BI workloads; implement OneLake architecture patterns to support hybrid cloud-to-on-premises integration scenarios
- Manage integration between Microsoft Fabric and Power BI semantic layers, enabling self-service analytics capabilities across the firm; stay current with the Microsoft Fabric release cadence and evaluate new features for adoption that advance the firm's data strategies
- Collaborate with the Data Governance team to establish and enforce data quality rules, validation frameworks, and automated anomaly detection within the data pipeline ecosystem
- Implement and maintain data lineage tracking tools (e.g., Purview or equivalent) and support data privacy and security controls, including column-level and row-level security, data masking, and access auditing
- Partner with IT Managers, peer Software/Data Engineers, and the Database Administrator to align data infrastructure with broader application and reporting goals; participate in the team's change process, contributing technical proposals and peer reviews
- Create and maintain thorough technical documentation, including architecture diagrams, pipeline runbooks, data flow maps, and operational procedures; provide technical guidance and knowledge transfer to team members, and communicate project status, risks, and outcomes clearly to IT leadership and stakeholders

QUALIFICATIONS AND REQUIREMENTS

- Bachelor's degree in Computer Science, Information Systems, Data Engineering, or a related discipline, or equivalent professional experience
- Minimum 5 years of professional experience in a data engineering, data architecture, or closely related role
- Demo