Collaborating with AI Studio Resolves GPU Limitations to Process More Data and Improve Results for a Research Paper
In January I had the pleasure of working with Anbu Valluvan, an HP AI Studio tester, to resolve a GPU limitation his team was running into: their personal laptops lacked sufficient GPU power, and Google Colab GPUs were not always available due to budget constraints or GPU queuing. The team had collected blood pressure datasets and created data strategies to ingest and process them in sizes appropriate for training small models for use with edge devices. However, their access to GPUs was limited. To achieve the accuracy required for the paper to qualify for peer review and acceptance, Anbu reached out to collaborate on the project using AI Studio because I had access to an NVIDIA GeForce RTX 4070 GPU, which allowed him to experiment and prepare the project across multiple iterations while access to an NVIDIA A100 GPU was limited.

In a couple of minutes Anbu invited me to his AI Studio project; I synced assets and artifacts and trained the model on my GPU. Because AI Studio saved artifacts, results, and cloned code to GitHub, Anbu could access them to continue his work.

Anbu’s work validated that edge AI optimizations can enable trained deep learning models to run on resource-constrained devices with real-time inference without sacrificing accuracy. Research methods and techniques, including data splitting strategies, quantization, and model compression, reduced memory usage while preserving accuracy. Data management involved MIMIC-IV waveform datasets, preprocessing to remove noise, and temporal alignment to ensure signal integrity.

Congratulations on the publication, Anbu! It was a fun project and a neat way to collaborate using AI Studio!
Published on March 26, 2025, this paper is the result of the team’s research. Here are 5 take-aways from Deep Learning for Non-Invasive Blood Pressure Monitoring: Model Performance and Quantization Trade-Offs:

1. Attentive BPNet Achieves Best Accuracy
The team used three deep learning models for continuous blood pressure monitoring: a residual-enhanced convolutional network, a transformer-based model, and Attentive BPNet. Attentive BPNet had the lowest mean absolute error (MAE): 6.36 mmHg for systolic BP (SBP), 4.09 mmHg for diastolic BP (DBP), and 4.56 mmHg for mean arterial pressure (MBP).

2. Edge AI Enables Real-Time Blood Pressure Monitoring
Post-training quantization reduced model size by 90.71% (from 1.40 MB to 0.13 MB) while maintaining accuracy, allowing deployment on edge devices. The residual-enhanced convolutional network maintained a 2.6 ms inference time, making real-time monitoring feasible.

3. Signal Processing Improves Accuracy
Preprocessing techniques such as adaptive filtering, peak-to-peak alignment, and statistical normalization enhanced model performance and stability.

4. Quantization Reduces Computational Cost with Minimal Accuracy Loss
Full-integer quantization provided up to 91.13% compression while keeping prediction errors within acceptable medical limits. However, transformer-based models struggled post-quantization, with inference latency exceeding 3000 ms.

5. Clinical Feasibility Meets Medical Standards
The residual-enhanced convolutional network met Association for the Advancement of Medical Instrumentation (AAMI) accuracy standards (±5 mmHg bias, <8 mmHg standard deviation), making this technique and model a strong candidate for clinical deployment.
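The paper's own quantization pipeline is not reproduced here, but the core mechanism behind full-integer quantization (mapping float32 weights to int8 with a per-tensor scale and zero point, a 4x raw size reduction) can be sketched in plain NumPy. This is a minimal illustration under stated assumptions: the weight matrix is random data standing in for real model weights, and `quantize_int8`/`dequantize` are hypothetical helpers, not the paper's code.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine int8 quantization: w is approximated by scale * (q - zero_point)."""
    w_min, w_max = float(weights.min()), float(weights.max())
    # Extend the range to include zero so 0.0 is exactly representable.
    w_min, w_max = min(w_min, 0.0), max(w_max, 0.0)
    scale = (w_max - w_min) / 255.0          # 255 = int8 range width (-128..127)
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 codes back to approximate float32 values."""
    return (scale * (q.astype(np.float32) - zero_point)).astype(np.float32)

# Random stand-in weights; a real model's layers would each be quantized this way.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(256, 64)).astype(np.float32)

q, scale, zp = quantize_int8(w)
print(w.nbytes / q.nbytes)  # 4.0: float32 -> int8 shrinks storage 4x
print(np.abs(dequantize(q, scale, zp) - w).max())  # worst-case rounding error
```

The per-tensor scale/zero-point scheme shown here is the simplest variant; production toolchains typically also quantize activations using a calibration ("representative") dataset, which is how compression ratios above 4x, like the 90.71% reported in the paper, become possible once overhead and activation storage are included.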
Recap from HP Amplify 2025: A New Era for AI Workflows
This week at HP Amplify 2025, we shared something big: a vision for how AI development and high-performance computing are evolving—right now. And for those of you building the future—data scientists, engineers, creative pros, AI teams—we’re designing with you, not just for you.

HP is partnering closely with NVIDIA to launch more AI Station products, including the HP ZGX Fury AI Station G1n, built on the powerful GB300 Grace Blackwell Superchip. In addition, HP is integrating NVIDIA RTX PRO Blackwell GPUs across its desktop and mobile workstations to deliver top-tier AI performance.

Bringing AI Closer to Home

We know that speed, cost, and security are key when it comes to AI development—and local compute is taking center stage. Meet the HP ZGX Nano AI Station G1n—our version of the NVIDIA DGX Spark. It’s built for secure, on-prem AI development, with the power of a full DGX stack, plus HP’s own AI creation center software that helps teams build, optimize, and scale faster. This is the beginning of a whole new AI Station category from HP, created for teams who need performance without compromise.

And that’s not all—we’re working closely with NVIDIA to bring even more to the table, including the HP ZGX Fury AI Station G1n, built on the ultra-powerful GB300 Grace Blackwell Superchip. It’s AI horsepower at the edge.

New Devices That Redefine What’s Possible

We also rolled out the next wave of performance machines:

HP ZBook Fury G1i (16" and 18")
The world’s most powerful 18" workstation. Built to handle Autodesk Inventor in the field, train LLMs on the go, or design without limits—this is mobile power with serious cooling and GPU options.

HP Z2 Tower G1i
Think “entry” means underpowered? Not here. This redesigned tower brings up to 600W GPU capacity and support for the NVIDIA RTX PRO 6000 Blackwell GPU—96 GB of GDDR7 memory, real-time rendering, and smoother multi-app workflows for architects and creators alike.
A Simpler, Smarter Z Brand

Z by HP is now a distinct HP Z product line designed to bring greater clarity and create a more seamless experience for our partners and customers. This alignment lets us tap into the full strength of the HP brand, while staying focused on what we do best: delivering advanced technology that powers innovation and connects compute, data, and people.

You’ll start to see changes in how we name and organize HP Z products. We’re keeping the names you know—like ZBook and Fury—but dropping terms like Firefly and Power for a cleaner structure.

What’s new:
- A more intuitive numbering system (like “G1i” and “G1a”)
- “Ultra” = innovative form factor
- “Fury” = max performance

More Than Hardware: AI Tools That Work With You

To support the full AI journey, we’ve been building a software stack to match:
- HP AI Studio – A secure, local dev space with pre-configured project blueprints
- HP GenAI Lab – Tools to tackle model drift, bias, and build trust
- HP Z Boost – Optimizes GPU usage for AI workloads
- HP Data Science Stack Manager – Curated open-source tools, ready to go

As HP expands its AI Station lineup for data science and AI development, you’re invited to register and stay updated on the latest news about the HP ZGX Nano and Fury AI Stations.