MLOps is clearly not “just DevOps for machine learning”; rather, it must combine methodologies from both worlds: data science and operations. One of the challenges is performing routine ML R&D tasks within an organization whose hardware setup keeps changing. Do the data scientists have their own machines? Do they need to spin up a remote VM for their research? Where and how do they access machines for training runs once the code has been tested? In this session we will show how tight coupling between orchestration and experiment tracking within the MLOps stack provides a productivity-boosting link between R&D teams and the expensive hardware available for their work.