
Senior Software Engineer - Real-Time Ingestion

Yahoo
Full-time · Remote (US) · USD 128,250–266,875 / year · Posted: 2026-05-11 · Until: 2026-07-10
Job Description
Yahoo serves as a trusted guide for hundreds of millions of people globally, helping them achieve their goals online through our portfolio of iconic products. For advertisers, Yahoo Advertising offers omnichannel solutions and powerful data to engage with our brands and deliver results.

About The Team

Our platform is the foundational identity and data layer for 900M+ monthly active users, serving 2.5B+ profiles at massive scale. We are building a predictive, identity-centric insights engine that ensures our audience is understood with precision, so we can deliver hyper-personalized experiences and advertising solutions across all our digital properties. Our mission centers on first-party data strategy: capturing, enriching, and activating audience signals to build a 360-degree view of every user. We operate under a Privacy-by-Design philosophy, adhering to global regulations (GDPR, CCPA) and industry security standards, while leveraging a cloud-native stack across GCP (BigQuery, Spanner, Dataflow, Composer, GKE) and AWS, with modern MLOps practices to deliver measurable business impact.

About The Role

As a Senior Data Engineer in the Consumer Data Organization (CDO), you will design and implement streaming data pipelines that process billions of user signals daily, maintaining a real-time view of 2.5B+ profiles. Your pipelines handle critical third-party ID mutations, behavioral signals, and identity updates with sub-second latency, ensuring data freshness for downstream activation and monetization use cases worth hundreds of millions in annual revenue. You will build scalable Kafka-based streaming infrastructure processing millions of events per second, implementing Apache Beam/Dataflow jobs for stream processing, enrichment, and validation. Your work requires balancing extreme throughput requirements, data quality guarantees, and operational reliability while ensuring privacy-compliant handling of sensitive user data. This role demands expertise in real-time streaming architectures, distributed messaging systems (Kafka, Pub/Sub), and production data engineering at massive scale. You will collaborate closely with the Storage, Privacy, and Platform teams to ensure efficient data flow from ingestion to activation.
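For illustration only, the sketch below shows the general shape of the kind of streaming job described above, using the Apache Beam Python SDK: read user signals from Pub/Sub, drop malformed records, window events, and republish them for downstream consumers. The topic names, field names, and validation rule are hypothetical stand-ins, not Yahoo's actual configuration.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows


def parse_signal(raw: bytes):
    """Decode one JSON-encoded user signal; return None for malformed records."""
    try:
        event = json.loads(raw.decode("utf-8"))
        # Hypothetical required fields for illustration.
        if "user_id" in event and "event_type" in event:
            return event
    except (ValueError, UnicodeDecodeError):
        pass
    return None


options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        # Unbounded source: raw user signals arriving on a hypothetical topic.
        | "ReadSignals" >> beam.io.ReadFromPubSub(topic="projects/example/topics/user-signals")
        | "Parse" >> beam.Map(parse_signal)
        | "DropMalformed" >> beam.Filter(lambda event: event is not None)
        # One-minute fixed windows, a simple stand-in for a real windowing/watermark policy.
        | "Window" >> beam.WindowInto(FixedWindows(60))
        | "Serialize" >> beam.Map(lambda event: json.dumps(event).encode("utf-8"))
        | "WriteValidated" >> beam.io.WriteToPubSub(topic="projects/example/topics/validated-signals")
    )
```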
Key Responsibilities
- Develop and optimize real-time streaming pipelines for third-party ID mutations, behavioral signals, and user event ingestion
- Build scalable Kafka-based data pipelines handling millions of events per second with exactly-once processing semantics
- Implement Apache Dataflow/Beam jobs for stream processing, enrichment, validation, and transformation of user signals
- Design comprehensive monitoring and data quality checks ensuring pipeline reliability, data freshness, and SLA compliance
- Collaborate with the Storage team on efficient Cloud Spanner write patterns, schema design, and high-throughput mutation strategies
- Optimize pipeline performance to reduce lag, improve throughput, and minimize processing costs in GCP infrastructure
- Implement dead letter queues, retry logic, and error handling strategies to prevent data loss (a sketch of this pattern follows the qualifications below)
- Troubleshoot production data issues including pipeline failures, data quality problems, and performance degradation
- Work with the Privacy team to ensure compliant data handling, PII protection, and sensitive data detection in real-time streams
- Create comprehensive documentation for pipeline architecture, operational runbooks, and on-call procedures
- Participate in the on-call rotation supporting production streaming pipelines with a 99.9% uptime SLA
- Partner with upstream data producers to ensure consistent event schemas and data quality

Required Qualifications

Education
- Bachelor's degree in Computer Science, Data Engineering, Software Engineering, or a related technical field

Experience
- 5+ years of data engineering experience building production data systems
- 3+ years of hands-on experience with real-time/streaming data processing systems at scale
- 2+ years with GCP (Dataflow, Pub/Sub, BigQuery, Spanner, GCS) or AWS equivalents (Kinesis, EMR, DynamoDB)

Technical Skills
- Strong proficiency in Python, Java, or Scala for data pipeline development
- Hands-on experience with Apache Kafka, Google Pub/Sub, or other distributed messaging platforms
- Experience with Apache Beam, Apache Dataflow, or Apache Spark Streaming for stream processing
- Understanding of stream processing patterns: windowing, watermarks, exactly-once semantics, state management
- SQL proficiency and experience with distributed databases (Spanner, Cassandra, DynamoDB)
- Familiarity with data serialization formats: Avro, Protobuf, JSON, Parquet

Competencies
- Strong problem-solving skills
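As referenced in the responsibilities above, the following is a hedged sketch of the dead-letter-queue pattern, again using the Apache Beam Python SDK with hypothetical topic and field names: records that fail parsing or validation are tagged to a side output and published to a separate dead-letter topic instead of being silently dropped, so they can be inspected and replayed later.

```python
import json

import apache_beam as beam
from apache_beam import pvalue
from apache_beam.options.pipeline_options import PipelineOptions


class ParseOrDeadLetter(beam.DoFn):
    """Emit valid events on the main output; route failures to a dead-letter output."""

    DEAD_LETTER = "dead_letter"

    def process(self, raw: bytes):
        try:
            event = json.loads(raw.decode("utf-8"))
            if "user_id" not in event:  # hypothetical validation rule
                raise ValueError("missing user_id")
            yield event  # main output: valid events
        except Exception as exc:
            # Attach the error and the original payload so the record can be replayed.
            payload = {"error": str(exc), "raw": raw.decode("utf-8", "replace")}
            yield pvalue.TaggedOutput(
                self.DEAD_LETTER, json.dumps(payload).encode("utf-8")
            )


options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as pipeline:
    results = (
        pipeline
        | "Read" >> beam.io.ReadFromPubSub(topic="projects/example/topics/user-signals")
        | "Parse" >> beam.ParDo(ParseOrDeadLetter()).with_outputs(
            ParseOrDeadLetter.DEAD_LETTER, main="valid"
        )
    )

    _ = (
        results[ParseOrDeadLetter.DEAD_LETTER]
        | "WriteDLQ" >> beam.io.WriteToPubSub(topic="projects/example/topics/user-signals-dlq")
    )
    _ = (
        results.valid
        | "Serialize" >> beam.Map(lambda event: json.dumps(event).encode("utf-8"))
        | "WriteValid" >> beam.io.WriteToPubSub(topic="projects/example/topics/validated-signals")
    )
```

Tagged outputs keep the happy path and the failure path in a single ParDo, which makes it straightforward to carry the original payload and the error message alongside each dead-lettered record.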