
Systems Performance Engineer

Micron Technology
Intern · Remote · Austin, TX, US · Posted: 2026-05-11 · Until: 2026-07-10
Job Description
Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever.

The engineer will work with senior engineers and researchers on AI training and inference systems, with a strong focus on LLM execution engines, data and KV‑cache management, and multi‑tier memory hierarchies across modern data‑center platforms. The role centers on end‑to‑end performance characterization and optimization of large‑scale AI workloads, spanning single‑node GPUs to rack‑scale inference deployments. Responsibilities include systems software development, workload engineering, performance analysis, and memory‑centric optimization for LLM training, serving, and agentic AI frameworks. The work emphasizes real customer inference and training workloads, emerging memory technologies (HBM, LP/DRAM, CXL, NVMe, remote memory fabrics), and the economics and token‑level efficiency of large‑scale inference systems. This role combines hands‑on engineering with applied systems research, directly influencing next‑generation AI platforms and memory‑driven system architectures.

Key Responsibilities

- Build, develop, and improve systems software tools for profiling, tracing, and analyzing LLM training and inference workloads
- Design and evaluate KV‑cache and state‑management strategies for LLM serving, including reuse, eviction, compression, tiering, and lifecycle management
- Build and extend benchmarking, simulation, and emulation frameworks for AI inference and training across heterogeneous memory tiers
- Develop and evaluate data placement, migration, and prefetching algorithms across HBM, LP/DRAM, CXL memory pools, NVMe, and remote memory systems
- Characterize and optimize LLM execution engines (prefill/decode), including attention behavior, batching strategies, and token‑level performance
- Analyze rack‑scale and cluster‑scale inference deployments, focusing on throughput, latency, utilization, cost, and token economics
- Develop workloads that reflect real customer AI systems, including LLM serving, agentic pipelines, retrieval‑augmented generation, multimodal inference, and long‑context workloads
- Instrument and analyze performance across GPUs, CPUs, memory subsystems, interconnects, and storage, identifying end‑to‑end bottlenecks
- Evaluate system interactions across OS, runtime layers, containerized deployments, and distributed inference stacks
- Automate performance measurement, experimentation, and analysis workflows to improve repeatability and scale
- Summarize findings into clear methodologies, internal reports, and technical presentations for engineering and leadership audiences
- Collaborate across engineering, architecture, and research teams, and with external academic and industry partners
- Provide actionable feedback to product, architecture, and platform teams to influence future AI systems and memory designs

Required Qualifications

- Bachelor's or Master's degree, or equivalent experience, in Computer Science, Electrical Engineering, or a related field
- Strong foundation in operating systems, memory systems, parallel computing, or distributed systems
- Proficiency in systems programming and analysis using C/C++ and Python
- Experience working in Linux environments, including debugging, profiling, and automation
- Solid understanding of modern server architectures, including GPUs, CPUs, cache hierarchies, NUMA, and memory subsystems
- Experience analyzing performance data and reasoning about system‑level behavior
- Strong written and verbal communication skills
- Ability to work independently on scoped problems and collaboratively on larger system efforts

Preferred Qualifications

- Experience with LLM training and inference systems, including execution runtimes and serving frameworks
- Hands‑on experience with KV cache management, long‑context execution, or stateful inference workloads
- Familiarity with GPU architectures and AI accelerators, including memory and interconnect behavior
- Experience with multi‑tier memory systems