
I recently saw that Z by HP ambassador Ekaterina Butyugina explored building an app with a chatbot feature powered by three different LLMs: ChatGPT, Mistral, and Gemma. The chatbot answers Q&A-style questions about text files that users upload through a Streamlit app (Streamlit is an open-source Python framework).
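
If it helps to picture the setup, here is a minimal sketch of that kind of Streamlit file QA app. It is not Ekaterina's actual code: it assumes the OpenAI Python client with an OPENAI_API_KEY in the environment, the model name is a placeholder, and the Mistral and Gemma backends would be wired up the same way with their own clients.

```python
# Minimal sketch of a Streamlit file QA chatbot (not the article's code).
# The whole uploaded file is stuffed into the prompt and sent to one LLM;
# a real app would chunk long documents. Assumes OPENAI_API_KEY is set.
import streamlit as st
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

st.title("File QA Chatbot")

uploaded = st.file_uploader("Upload a text file", type=["txt"])
question = st.text_input("Ask a question about the file")

if uploaded and question:
    document = uploaded.read().decode("utf-8", errors="ignore")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; swap per provider
        messages=[
            {"role": "system", "content": "Answer using only the provided document."},
            {"role": "user",
             "content": f"Document:\n{document}\n\nQuestion: {question}"},
        ],
    )
    st.write(response.choices[0].message.content)
```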

When running, Gemma generated responses in approximately 3 seconds.
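
For anyone wanting to collect the same kind of numbers, one simple approach is to wrap each backend call with a timer. The helper below is only a rough sketch; ask_openai, ask_mistral, and ask_gemma are hypothetical wrapper functions (question in, answer string out), not functions from the article.

```python
# Rough sketch of how per-model response times could be compared.
import time

def timed_answer(ask_fn, question):
    """Call an LLM wrapper and return (answer, elapsed_seconds)."""
    start = time.perf_counter()
    answer = ask_fn(question)
    return answer, time.perf_counter() - start

# Hypothetical usage, assuming one wrapper function per backend:
# backends = {"ChatGPT": ask_openai, "Mistral": ask_mistral, "Gemma": ask_gemma}
# for name, ask_fn in backends.items():
#     answer, seconds = timed_answer(ask_fn, "What is this file about?")
#     print(f"{name}: {seconds:.1f}s")
```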

 

For others building similar Q&A chatbots leveraging different LLMs…

  • Were times and results drastically different among your various integrated LLMs?
  • What were your times and findings?


Curious to hear! Linking Ekaterina’s Medium article with step-by-step instructions here:
Commercial, Gated, and Open-Source LLMs in Your File QA Chatbot
