
Use AI Studio to track experiments and manage models with MLflow. After running a few modules through AI Studio, the next step is to view those experiments in MLflow. AI Studio tracks experiments through hooks in your code and registers the resulting models in the MLflow model registry.
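
For reference, this is roughly what the underlying MLflow calls look like. It is a minimal sketch assuming a standard MLflow setup with scikit-learn; the experiment name "demo-experiment" and model name "DemoModel" are placeholders, and AI Studio configures the tracking URI for you.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Placeholder experiment name; AI Studio points MLflow at its workspace server.
mlflow.set_experiment("demo-experiment")

X, y = load_iris(return_X_y=True)

with mlflow.start_run(run_name="baseline"):
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # registered_model_name places the logged model in the MLflow model registry.
    mlflow.sklearn.log_model(model, "model", registered_model_name="DemoModel")
```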

In the Monitor tab, you can view all of your experiments, including their creation times, and drill into each one to examine its metrics and other saved details. In this demo, no specific metrics were configured, but the model was still saved to the registry. The presenter also points out that any artifacts from the code and project structure are stored here as well.
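
If you prefer to inspect the same information from code rather than the Monitor tab, the MLflow client API exposes it. A minimal sketch, assuming the hypothetical "demo-experiment" above and an MLflow 2.x client:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()  # uses the tracking URI configured by AI Studio

# The experiment and its creation time, as shown in the Monitor tab.
experiment = client.get_experiment_by_name("demo-experiment")
print(experiment.name, experiment.creation_time)

# Runs under the experiment, with their metrics and logged artifacts.
for run in client.search_runs(experiment_ids=[experiment.experiment_id]):
    print(run.info.run_id, run.data.metrics)
    for artifact in client.list_artifacts(run.info.run_id):
        print("  artifact:", artifact.path)
```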

AI Studio also includes a web application that lets you run local inference through a portal, so you can confirm the model performs as expected before deployment. From the registry, you can check how many versions of a model exist, view details such as which teammates contributed each version, and explore timestamps and analytics.
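
The same registry details (version count, who produced each version, timestamps) are also available programmatically, and a registered version can be loaded for a quick local inference check. A sketch, again using the hypothetical "DemoModel":

```python
import numpy as np
import mlflow
from mlflow.tracking import MlflowClient

client = MlflowClient()

# Every version of the registered model, with creator and creation timestamp.
for mv in client.search_model_versions("name='DemoModel'"):
    print(mv.version, mv.user_id, mv.creation_timestamp)

# Load one version locally and confirm it predicts as expected.
model = mlflow.pyfunc.load_model("models:/DemoModel/1")
print(model.predict(np.array([[5.1, 3.5, 1.4, 0.2]])))
```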

Finally, in the Publish Services tab, the presenter shows how to create a new service from an existing model. Select the desired model version from the MLflow registry, give the service a name (in this example, "V2"), and hit the play button to deploy and publish it. The full deployment process is covered in more detail in the next video.
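
Once the service is published, it can be exercised from code as well. This is a hypothetical call, assuming the published service exposes MLflow's standard /invocations scoring endpoint; the URL below is a placeholder, and AI Studio shows the actual endpoint after the service is published.

```python
import requests

# Placeholder URL — replace with the endpoint AI Studio assigns to the "V2" service.
url = "http://localhost:5001/invocations"
payload = {"inputs": [[5.1, 3.5, 1.4, 0.2]]}

response = requests.post(url, json=payload)
print(response.json())
```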

Have any questions? Leave a comment below!

Learn more about AI Studio here and see how Z by HP powers Data Science & AI Solutions.
