
🎥 Skip the Cloud: Train AI Models Using Local Resources with AI Studio (Webinar Recap)

  • 13 September 2024

Thank you for joining our recent webinar, “Skip the Cloud: Train AI Models Using Local Resources with AI Studio”. We hope you found the session both engaging and informative!

If you’d like to revisit the webinar or explore additional resources, you can find the recording and related materials below. Additionally, if you're interested in learning more about AI Studio, feel free to schedule a one-on-one session with us here: https://calendly.com/sothan-thach-hp/

 

🎥 Webinar Recording

 

We're looking forward to connecting with you at our next webinar on October 4th (Register Here!). Also, get ready for an exclusive look at the latest innovations from HP and the Z by HP team, coming your way at HP Imagine 2024 on September 24th. Stay tuned for all the exciting updates!

 

Resources

Webinar Q&As

Q: Are there any specific requirements for data preparation before uploading an asset from local storage or S3? 

A: This isn't about data management or pipeline-processing functionality. When you update or attach a pointer to a new asset, you're essentially just pointing to a folder; your assets or data are treated as objects. If you need to create a data pipeline, that has to be done programmatically, either through your notebook or via your usual processes, to bring the data in for management.
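For illustration, here is a minimal sketch of what that programmatic preparation might look like in a notebook, before attaching an asset pointer to the resulting folder. The file paths, column name, and cleaning steps are all hypothetical:

```python
# Hypothetical notebook-side data preparation. AI Studio treats the asset
# as an opaque pointer to a folder, so any cleaning happens before you
# point the asset at the output directory.
import pandas as pd
from pathlib import Path

raw = pd.read_csv("data/raw/measurements.csv")   # hypothetical source file
clean = raw.dropna().drop_duplicates()           # basic pipeline steps
clean["value"] = clean["value"].astype(float)    # hypothetical column cleanup

out_dir = Path("assets/clean")
out_dir.mkdir(parents=True, exist_ok=True)
clean.to_csv(out_dir / "measurements.csv", index=False)
# In AI Studio, attach the asset pointer to assets/clean once this has run.
```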

 

Q: Does HP handle the syncing between all peers in the background? 

A: Yes, absolutely. Peer-to-peer syncing is handled by a mechanism called Syncthing. For example, if I open a project and place something in the shared folder in the notebook, it automatically syncs between me and my collaborators. This syncing is peer-to-peer and facilitates quick object sharing among team members.

 

You can turn off this sync feature if needed, but it’s useful for fast collaboration. 
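As a hedged illustration, placing an artifact in the shared folder from a notebook is all it takes for Syncthing to propagate it to collaborators. The mount point used here (shared/) is an assumption, so check your own workspace layout:

```python
# Illustrative only: drop a small artifact into the project's shared folder
# so Syncthing can propagate it to collaborators. The "shared/" mount point
# is an assumption about the workspace layout, not a documented path.
from pathlib import Path
import json

shared = Path("shared")                      # assumed shared-folder mount point
shared.mkdir(exist_ok=True)
metrics = {"epoch": 5, "val_loss": 0.42}     # hypothetical training metrics
(shared / "metrics.json").write_text(json.dumps(metrics, indent=2))
```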

 

As for metadata syncing, that's managed through MongoDB, hosted by HP. It handles metadata about the account and definitions, including data pointers. For instance, if you or a collaborator point to a specific dataset, but they don't have the right permissions (e.g., access to the S3 bucket), your data policies will prevent them from using it. The endpoint will effectively be locked, and they won’t have access. HP’s OneCloud service manages the metadata syncing in the background. 

 

Q: Will you be adding support for additional NVIDIA GPU drivers for Tesla K80 GPUs? 

A: AI Studio is designed to be hardware- and OS-agnostic. Our vision is to empower data scientists who work across various hardware and operating systems by providing the tools they need. What we aim to facilitate is easy access to the compute resources they require, including GPUs. 

 

Currently, AI Studio supports NVIDIA GPUs with driver version 528.89 or higher. If the Tesla K80 is compatible with those drivers, then it's already supported.
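To check this on your own machine, here is a small sketch that assumes nvidia-smi is on your PATH and compares the installed driver against that minimum:

```python
# Compare the installed NVIDIA driver against AI Studio's stated minimum
# (528.89). Assumes nvidia-smi is installed and on the PATH.
import subprocess

MIN_DRIVER = (528, 89)

raw = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    text=True,
).strip().splitlines()[0]

installed = tuple(int(part) for part in raw.split("."))
verdict = "meets" if installed >= MIN_DRIVER else "is below"
print(f"Driver {raw} {verdict} the AI Studio minimum of 528.89")
```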

 

Q: Will HP support Azure AI Studio development environments, and do you plan to offer your own compute resources in the future? 

A: First, Azure is definitely part of our roadmap. Our goal is to help you access the compute power you need, no matter what tools, operating systems, or hardware you're using. 

 

That’s the vision. We already have integrations with several Microsoft products—Windows, WSL 2, GitHub, and Azure Blob Storage are a few examples. Upcoming additions on the roadmap include VS Code, Azure cloud services, AWS cloud services, and more. We've listened to feedback from our users, and they appreciate having access to local GPUs for training and development. However, sometimes local compute isn’t enough, and they need additional power. 

 

The idea is to allow users to seamlessly transfer their projects and configurations to another environment with greater compute resources—whether that’s an on-premise server or a cloud service. 

 

Ultimately, when you're ready to deploy or train, you should be able to choose where and how that happens—whether it's on your local hardware or in the cloud. The goal is to give you the flexibility to work wherever you need. 

 

Q: Do you plan to support community-developed integrations to expand the capabilities of AI Studio? 

A: While you can share your applications or technologies with colleagues through the community, we don't have a formal marketplace for sharing libraries at this time.

 

However, it’s an interesting idea, and we’re definitely considering it for the future. 

 

Q: Does AI Studio have a Vision Language Model (VLM) that can recognize and label elements in architectural blueprints or plans, such as walls, doors, and other features? 

A: Right now, we're working on adding more integrations to support automation for tasks like labeling, annotation, and data preparation. Our goal is to let you use the tools you’re already familiar with, rather than trying to replace them, while still providing the compute resources you need to get the job done. 

 

Currently, we don’t have these additional integrations available. However, we’ve heard your feedback loud and clear and recognize the need for more functionality. We understand that integrations with cloud compute deployments, Microsoft Azure, NVIDIA, and tools for labeling, annotation, and data processing are crucial. 

 

At the moment, you can still implement these integrations programmatically through notebooks, but we don’t have a simplified, no-code API or user interface for these tasks yet. 
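As one hedged example of that programmatic route, a notebook could pull an open-source zero-shot object detector from Hugging Face to label blueprint elements. This is not an AI Studio integration; the model choice, label set, and image file are illustrative:

```python
# Zero-shot detection of blueprint elements with an open-source model.
# Requires the transformers package; model, labels, and image are examples.
from transformers import pipeline

detector = pipeline(
    "zero-shot-object-detection",
    model="google/owlvit-base-patch32",
)
labels = ["wall", "door", "window", "staircase"]
detections = detector("blueprint.png", candidate_labels=labels)  # hypothetical image

for d in detections:
    print(f"{d['label']}: score={d['score']:.2f}, box={d['box']}")
```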

 

Q: Will we have the option to integrate open-source models into our projects during the project setup? 

A: Yes, we are currently developing that capability. In the next release of AI Studio, we’ll introduce integration with NVIDIA’s catalog, which includes GPU-optimized models. This catalog features both open-source and NVIDIA-vetted models. 

 

AI Studio users will have immediate access to these NVIDIA assets, including free models for transfer learning. Additionally, we’re working on integrating with other popular model hubs, which should be available in the next release or soon after. 

 

Q: Is the end goal for AI Studio to become API-agnostic, allowing integration with third-party development tools and the Ollama Web UI? 

A: Yes. Currently, in the custom environment, you can create a workspace and use base images provided by us to get started quickly. You can specify whether you're using a CPU, a GPU, or, in the future, an NPU. Within this environment, you can add and install libraries, such as LLaMA, using package management tools like Conda, Pip, and Git.

 

While it can be tricky to ensure everything works seamlessly due to containerization, we provide the flexibility to install and work with libraries using these tools.
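For example, a notebook cell in a custom workspace might install and load such a library like this; the package (llama-cpp-python) and the local weights path are illustrative choices, not bundled defaults:

```python
# Install a library into the running workspace container, then load it.
# The package and the GGUF weights path are illustrative.
import subprocess, sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "llama-cpp-python"])

from llama_cpp import Llama

llm = Llama(model_path="models/llama-3-8b.Q4_K_M.gguf")  # hypothetical local weights
print(llm("Summarize peer-to-peer sync in one sentence.", max_tokens=64))
```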

 

Q: In the upcoming release, which Vision Language Models (VLMs) are prioritized on your roadmap? Specifically, will models like Microsoft's Florence-2, PaliGemma, YOLO, and SAM 2 be included? 

A: YOLO is definitely supported in AI Studio. We're focusing on three main use cases: traditional machine learning, deep learning, and computer vision. YOLO, being a popular computer-vision tool, is included in the packages and tools we support.

 

Regarding Generative AI improvements and additional frameworks or models, we are integrating tools like LLaMA and working on prompt quality improvements with our partner, Galileo. 

 

You can also create a custom workspace in AI Studio and include packages like YOLO as needed. As long as you configure your container properly, you should be able to use these tools effectively. 
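As a quick sketch, assuming the ultralytics package is installed in your container, running YOLO in a custom workspace can be as short as this (weights and input image are illustrative):

```python
# Run a pretrained YOLO model on a sample image. Assumes the ultralytics
# package is installed; yolov8n.pt is downloaded automatically on first use.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")       # pretrained nano weights
results = model("sample.jpg")    # hypothetical input image

for r in results:
    r.show()                     # visualize detections
```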

 
