
Software Development Engineer - AI Enablement

Auger
Full-time · Remote (US) · Bellevue, WA, King County, US
USD 12,500–16,667 / month
Posted: 2026-05-11 · Open until: 2026-07-10
Job Description
Build at Auger

Auger is the autonomous operating system for supply chains. It connects enterprise supply chain systems (ERP, WMS, TMS) into a single data layer, then uses AI to detect problems, evaluate trade-offs, and execute decisions automatically. The platform eliminates the coordination tax: the time and capital lost when disconnected systems force humans to become the integration layer between planning and execution. Actions that previously required days of meetings and manual coordination happen in seconds, within constraints the customer defines. Founded by Dave Clark and backed by $100M from Oak HC/FT. Headquartered in Bellevue, Washington.

About the Team & Role

Auger is building an autonomous operating system for the supply chain. Our customers rely on Auger to understand reality and change it: reporting, AI-powered decision support, and write-back execution systems that operate at scale. This role is data-centric software engineering, and we hold a high bar for quality: you'll help turn messy, customer-shared data into a unified semantic layer that analytics, AI workflows, and execution paths can rely on. This is not a "move data from A to B" role. You are expected to own data correctness, semantic correctness, and operability across the data lifecycle.

What You'll Do

As a Software Development Engineer, you will draw on a solid data engineering background to deliver end-to-end on assigned pipelines and transformation work, help troubleshoot production issues, and consistently improve quality through tests, checks, and sound operational habits. You will:

- Build and maintain data pipelines across their full lifecycle.
- Ship production-grade transformation logic and operational outputs, using schema contracts and measurable validation.
- Work in an agent-native style: use AI tools to move faster on data exploration, transformation, querying, investigation, and refactors.
- Contribute to reusable patterns and tooling (including agent-assisted workflows) so the team can discover schemas, draft transforms, generate SQL faster, and troubleshoot with less one-off work.
- Build and maintain the integration points between data pipelines and ML pipelines.
- Implement schema-bound datasets that transform pipeline outputs into ML-ready inputs, and write ML results back to the semantic layer following established contracts.
- Contribute to schema design and enforce data contracts that keep model logic cleanly separated from the system of record.
- Operate what you build: monitor and alert as appropriate, participate in incident remediation, and follow through so issues do not repeat.
- Practice test-driven habits for data: clarify correctness for the datasets you touch, add automated checks and regression coverage where it matters, and turn bugs and incidents into fixes that stick.
- Partner with product, science, and platform teammates to clarify requirements, flag trade-offs early, and deliver work that holds up for customers.

What You Bring

- A degree in Computer Science, Mathematics, Statistics, or another data-intensive discipline (or equivalent practical experience).
- 4+ years of professional development experience with strong hands-on SQL and Python in production (Spark or equivalent large-scale batch processing preferred; Scala/Flink/Beam a plus).
- 3+ years in data work (structured and semi-structured), modern warehouses/lakehouses, and practical schema design in evolving domains.
- An ownership mindset on production systems: you debug methodically, improve reliability over time, and connect your work to customer and product outcomes.
- Hands-on experience with lakehouse/warehouse patterns, incremental processing, and basic performance and cost awareness.
- Notebook fluency and the judgment to structure notebook work so it is reviewable and promotable.
- Validation-first habits for data: meaningful checks between layers, data-quality checks where they count, and regression protection for critical transforms.
- Agent-native fluency with verification: you treat generated SQL and pipelines as proposals until proven.
- Clear communication and collaboration: you ask good questions, drive work to completion, and leave the codebase better than you found it.
- Experience in supply chain, planning, or fulfillment domains is a plus.

Compensation & Benefits

As part of our commitment to People Powered Greatness, we invest in our team members with competitive compensation and a comprehensive benefits package to support your health, financial future, and daily life. The package includes medical, dental, and vision coverage, a 401(k) with company match, and commuter benefits. Total compensation may include a combination of a competitive base salary and equity. Your initial placement within our salary range