In this walkthrough, a BERT Q&A model is saved to the MLflow model registry so that AI Studio can deploy and publish it for local inference. Version 2 of the model is deployed, which exposes a local endpoint through a Swagger API portal. Clicking the endpoint link opens the API directly, where you can try out requests, adjust the inputs, and execute them against the model. A web application tied to the model is also saved in the MLflow registry for local inference. Together, this lets the model be thoroughly tested and interrogated before it is promoted to production.
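As a rough illustration of the first step, here is a minimal sketch of logging a Hugging Face BERT question-answering pipeline to the MLflow model registry. The model checkpoint, artifact path, and registered model name are assumptions for the example, not the names used in the walkthrough.

```python
# Sketch: register a BERT Q&A pipeline in the MLflow model registry.
# Each call to log_model with registered_model_name creates a new
# registry version (1, 2, ...), which AI Studio can then deploy.
import mlflow
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",  # assumed checkpoint
)

with mlflow.start_run():
    mlflow.transformers.log_model(
        transformers_model=qa_pipeline,
        artifact_path="bert_qa_model",
        registered_model_name="bert-qa",
    )
```

Once a registry version is deployed as a local endpoint, it can also be queried programmatically rather than through the Swagger page. The snippet below assumes the standard MLflow `/invocations` scoring protocol on port 5000; the actual host, port, and payload shape depend on how AI Studio exposes the deployment.

```python
# Sketch: send a question/context pair to the local scoring endpoint
# and print the model's answer. URL and payload format are assumptions.
import requests

payload = {
    "inputs": {
        "question": ["What does MLflow manage?"],
        "context": [
            "MLflow manages the machine learning lifecycle, including "
            "experiment tracking and the model registry."
        ],
    }
}

response = requests.post("http://localhost:5000/invocations", json=payload)
print(response.json())
```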

Have any questions? Leave a comment below!

Learn more about AI Studio here and see how Z by HP powers Data Science & AI Solutions.
