Job Description
About ClickHouse

Recognized on the 2025 Forbes Cloud 100 list, ClickHouse is one of the most innovative and fastest-growing private cloud companies. With more than 3,000 customers and ARR that has grown over 250 percent year over year, ClickHouse leads the market in real-time analytics, data warehousing, observability, and AI workloads. The company's sustained, accelerating momentum was recently validated by a $400M Series D financing round.

Over the past three months, customers including Capital One, Lovable, Decagon, Polymarket, and Airwallex have adopted the platform or expanded existing deployments. These customers join an established base of AI innovators and global brands such as Meta, Cursor, Sony, and Tesla.

We're on a mission to transform how companies use data. Come be a part of our journey!

About the Role

AI applications are being built faster than teams can monitor, debug, or trust them. ClickHouse recently acquired Langfuse, the leading open source LLM observability platform, making it a core part of the ClickHouse product stack. Together, ClickHouse and Langfuse offer engineering teams the most powerful combination in the market: real-time, high-performance analytics infrastructure paired with best-in-class LLM tracing, evaluation, and observability tooling.

This role sits at the center of that combined story. We're looking for a Langfuse Solutions Architect who is already embedded in the AI observability ecosystem: someone who understands how engineering teams instrument and evaluate LLM applications, and who can credibly represent the full ClickHouse + Langfuse platform to the teams that need it most.

This is not a generalist SA role. You'll be our dedicated technical presence in the LLM observability space: opening doors through the Langfuse community, deepening relationships with AI engineering teams, and helping them get the most out of a platform that now spans from raw data infrastructure to production LLM monitoring.
You'll work at the intersection of community, pre-sales, and technical advisory, and you'll be the person who makes the ClickHouse + Langfuse stack the obvious choice for teams building serious AI applications.

What You'll Be Doing

Pre-Sales & Technical Advisory
- Lead technical evaluations with AI engineering teams considering ClickHouse as their observability data store, from initial architecture review through POC and production deployment
- Engage directly with data engineers, ML engineers, and platform architects to understand their LLM application stack, trace volumes, evaluation workflows, and query patterns, and map those requirements to ClickHouse + Langfuse capabilities
- Work across all levels of customer organizations, from individual contributors building LLM pipelines to CTOs making infrastructure decisions
- Design and deliver reference implementations, schema designs, and ingestion patterns optimized for LLM trace data at scale

Pipeline & Revenue Contribution
- Source and qualify pipeline directly through ecosystem relationships and community engagement; this role is expected to open doors, not just walk through them
- Partner with ClickHouse AEs to progress and close opportunities within the AI and LLM observability segment
- Advocate internally for product improvements and integration enhancements that strengthen the ClickHouse + Langfuse story

Ecosystem & Community Presence
- Serve as ClickHouse's primary technical voice in the Langfuse community: contributing to forums, engaging on GitHub, participating in events, and building authentic credibility with AI engineers and developers
- Develop relationships with the Langfuse core team and ecosystem partners to identify joint GTM opportunities and integration improvements
- Create technical content (blog posts, tutorials, reference architectures, and demo environments) that showcases ClickHouse + Langfuse as the analytics backbone for LLM observability workloads

What You Bring
- Hands-on experience in the LLM observability or AI monitoring space, whether at a vendor or as a practitioner building and operating LLM applications in production
- Technical depth in the modern AI stack: you're comfortable discussing prompt engineering, RAG architectures, evaluation frameworks, token economics, and the data infrastructure that supports them
- Customer-facing experience in pre-sales, solutions engineering, developer advocacy, or technical account management; you've navigated technical conversations with real stakes and know how to build trust with engineering teams
- Strong foundation in data infrastructure: experience with analytical data