
HPC Support Engineer

SAIC
Full-Time · Remote (US) · Charlottesville, VA, US · Posted: 2026-05-11 · Until: 2026-07-10
Job Description
Job ID: 2610673
Location: Charlottesville, VA, US
Date Posted: 2026-03-26
Category: Engineering and Sciences
Subcategory: Systems Engineer
Schedule: Full-Time
Shift: Day Job
Travel: No
Minimum Clearance Required: Top Secret
Clearance Level Must Be Able to Obtain: TS/SCI
Potential for Remote Work: No (on-site)

Description

SAIC is looking for a highly qualified HPC Support Engineer to support the Army’s Golden Dome initiative. The engineer will support users executing workloads within Linux-based High Performance Computing (HPC) cluster environments used for distributed compute workloads, simulation environments, and GPU-enabled processing.

The environment will include:

- Multi-node Linux compute clusters
- Workload scheduling platforms such as Slurm or PBS
- Distributed parallel compute workloads utilizing MPI or OpenMP
- GPU-enabled compute resources supporting CUDA-based processing
- High-performance networking technologies, including RDMA / InfiniBand

The system will be used to support scientific computing, simulation workloads, and other distributed compute operations within a secure research environment. Candidates should be comfortable working within cluster-scale computing environments where performance, scheduler configuration, and distributed workload execution are critical operational factors.

The HPC Support Engineer will assist users executing computational workloads within HPC cluster environments. The role focuses on:

- Supporting distributed compute workloads
- Troubleshooting job execution issues
- Assisting users with scheduler job submission scripts
- Identifying workload performance bottlenecks
- Supporting GPU-enabled workloads
- Promoting efficient cluster utilization and HPC best practices

Candidates should have experience working with distributed compute workloads and Linux-based HPC environments.

Core Technical Capabilities

Candidates should demonstrate capability in most of the following areas.
HPC Workload Execution

Experience supporting the execution of distributed workloads on HPC cluster platforms. Candidates should understand how compute workloads interact with cluster schedulers, compute nodes, and distributed resources.

Workload Scheduling Platforms

Experience executing and troubleshooting workloads using schedulers such as:

- Slurm
- PBS / PBS Pro
- Torque
- Grid Engine

Candidates should understand job submission workflows and resource allocation concepts such as CPU, memory, and GPU scheduling, and should be comfortable reading and troubleshooting scheduler job submission scripts used to execute distributed workloads.

Linux Systems Usage

Strong Linux experience, including:

- Command-line system usage
- Execution of compute workloads within Linux environments
- Troubleshooting application execution issues

Experience with RHEL-based environments is preferred.

Distributed Compute Workloads

Experience supporting distributed workloads utilizing parallel computing frameworks such as MPI and OpenMP, and supporting the compilation and execution of scientific or engineering applications within Linux HPC environments. Familiarity with common HPC programming languages, including C/C++ and Fortran. Candidates should understand how compiled applications interact with scheduler configuration, compute resources, cluster networking, and distributed runtime environments. Experience troubleshooting application build or runtime issues related to compiler configuration, library dependencies, or MPI environments is desirable, as is familiarity with common HPC compiler toolchains such as GCC, Intel, or LLVM-based compilers.

GPU Compute Workloads

Experience executing or supporting workloads utilizing GPU-enabled compute environments and CUDA frameworks is desirable.
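To make the job-submission and resource-allocation concepts above concrete, here is a minimal sketch of the kind of Slurm batch script a support engineer reviews with users. All names (partition, script, and application names) are placeholders, not values from this posting; the snippet generates the script locally and sanity-checks its resource directives so it can run without a live cluster.

```shell
#!/usr/bin/env bash
# Hypothetical example: a minimal Slurm batch script requesting CPU, memory,
# and GPU resources for a multi-node MPI run, written to a local file.
cat > demo_job.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=mpi_demo          # job name shown in squeue output
#SBATCH --partition=compute          # placeholder partition name
#SBATCH --nodes=2                    # multi-node distributed run
#SBATCH --ntasks-per-node=16         # MPI ranks per node
#SBATCH --mem=32G                    # memory per node
#SBATCH --gres=gpu:1                 # one GPU per node for CUDA kernels
#SBATCH --time=01:00:00              # wall-clock limit

srun ./mpi_app                       # scheduler-aware MPI launch
EOF

# A quick pre-submission check a support engineer might run: confirm that
# every resource class the user's workload needs is actually requested.
for directive in --nodes --ntasks-per-node --mem --gres; do
  grep -q -- "$directive" demo_job.sbatch && echo "ok: $directive present"
done
```

On a real cluster the script would be submitted with `sbatch demo_job.sbatch`; mismatches between these directives and the application's actual MPI rank count or memory footprint are a common source of the job-execution issues this role troubleshoots.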
Performance Troubleshooting

Ability to identify issues affecting workload execution, including:

- Inefficient resource allocation
- Scheduler configuration issues
- Application execution failures
- Distributed compute performance bottlenecks

Automation and Operational Tooling

Experience writing scripts or tooling using languages such as Bash and Python. Automation experience supporting workload execution or ope
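The performance-troubleshooting and scripting duties above often combine in practice: flagging jobs that requested far more CPUs than they used. The sketch below works on simplified, made-up accounting data (real sites would pull the equivalent fields from `sacct`); the job IDs and numbers are illustrative only.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: flag over-allocated jobs from simplified accounting
# data. Fields: JobID AllocCPUs Elapsed_s CPUSecondsUsed (invented values;
# a real workflow would derive these from `sacct` output).
cat > jobs.txt <<'EOF'
1001 16 3600 50000
1002 32 3600 20000
1003 8  1800 13000
EOF

# CPU efficiency = CPU-seconds actually used / (allocated CPUs * elapsed
# seconds). Jobs below 50% efficiency are flagged for follow-up with the
# user, e.g. to reduce their --ntasks request. Job 1002 is flagged here.
awk '{ eff = $4 / ($2 * $3)
       printf "%s eff=%.2f %s\n", $1, eff,
              (eff < 0.5 ? "FLAG: over-allocated" : "ok") }' jobs.txt
```

Reports like this promote efficient cluster utilization: freeing idle allocated CPUs shortens queue waits for other users without changing flagged jobs' results.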