Webinar
Thu, Jun 12, 4:00 PM - 5:00 PM (UTC)

Accelerating AI Inference with AI Studio: Llama.cpp vs. TensorRT-LLM

About this event

Choosing the right inference framework can make or break your AI development strategy. In this webinar, Rafael Borges, a Software Architect and AI Engineer at HP, will compare Llama.cpp and TensorRT-LLM to help you determine the best fit for your GPU-accelerated edge application – using HP AI Studio to streamline testing, benchmarking, and customization.


You’ll gain actionable insights into:


  • Real-world performance trade-offs
  • Development and deployment considerations
  • Practical use cases and implementation blueprints in HP AI Studio
  • How HP AI Studio accelerates experimentation and optimization for edge AI inference


Live Webinar: Accelerating AI Inference with AI Studio: Llama.cpp vs. TensorRT-LLM
Speaker: Rafael Borges, Software Architect and AI Engineer at HP
Date: June 12th
Time: 9:00 AM – 9:45 AM PDT


Limited spots available – register today!

Event details
Online event
Thu, Jun 12, 4:00 PM - 5:00 PM (UTC)