Member of Technical Staff
Full-time • San Francisco, CA
Join us to build the future of GPU optimization and AI infrastructure. You'll work directly with the team to define our technical direction and build the core systems that power our GPU optimization platform.
What You'll Do
• Build scalable infrastructure for AI model training and inference
• Lead technical decisions and architecture choices
• Mentor junior engineers and interns
What We Look For
Core Technical Expertise
• GPU Fundamentals: Deep understanding of GPU architectures, CUDA programming, and parallel computing patterns.
• Deep Learning Frameworks: Proficiency in PyTorch, TensorFlow, or JAX, particularly for GPU-accelerated workloads.
• LLM/AI Knowledge: Strong grounding in large language models (training, fine-tuning, prompting, and evaluation).
• Systems Engineering: Proficiency in C++ and Python, with Rust or Go a plus, for building tooling around CUDA.
Ideal Background
• Publications or open-source contributions in GPU computing or ML/AI for code are a plus.
• Hands-on experience with large-scale experiments, benchmarking, and performance tuning.