We’re doing exploratory research on how AI teams actually run scaled workloads. Want to contribute?
Our team at HP is running a short, self‑guided AI‑moderated interview to better understand how teams approach AI development, model training, and production inference—and where today’s compute setups fall short. We’re especially interested in AI/ML engineers, data scientists, MLOps or platform engineers working with GPUs.
If your use cases align, you can also opt in to be considered for a future lighthouse collaboration.
What to expect:
✅ ~15–20 minutes
✅ No scheduling — complete it anytime
Take the interview here:
👉 https://tinyurl.com/hpincai