Description:
Our AI Inference team productionizes ML models: we train and deploy large neural networks for efficient inference on compute-constrained edge devices (CPU / GPU / AI ASIC). The role is multi-disciplinary: you will work at the intersection of machine learning and systems, building the ML frameworks and infrastructure that enable seamless training, deployment, and inference of all neural networks that run on Autopilot and Optimus.

Responsibilities:
Build robust AI frameworks
Mar 20, 2025; from: dice.com