
Some years ago, Federated Learning and Privacy-Preserving Machine Learning (PPML) were heavily hyped in the AI community, and I still think they have huge relevance for the future of AI. The idea is that future devices will have more than enough power to run most of our biggest models, or that models will become efficient enough to run on them, all while keeping our data private and secure.
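For anyone unfamiliar with the mechanics, here's a minimal sketch of the core idea behind Federated Averaging (FedAvg), the canonical federated-learning algorithm: clients train locally on private data and only model weights, never raw data, are sent to the server and averaged. This is a toy NumPy example with a linear model; all names and numbers are illustrative, not a production implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Three hypothetical clients, each holding data that never leaves the device.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
global_w = np.zeros(3)

for _ in range(50):
    # Each client trains locally on its own data...
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # ...and the server averages only the weights (FedAvg step).
    global_w = np.mean(local_ws, axis=0)
```

Real systems add secure aggregation and differential privacy on top, but the privacy win is already visible here: the server only ever sees weight vectors.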

What do you think about this? Is this where AI development will eventually head? What other directions seem the most feasible and likely to see the most use in our future? A couple that come to my mind are Embodied AI and Causal Machine Learning.

Embodied AI and Causal ML have solid potential. Embodied AI could enhance physical interactions, while Causal Machine Learning might improve our understanding of complex systems. The future of AI will likely involve a blend of these innovative approaches.


I do! Especially for experimentation and prototyping. I believe most of us code locally: we experiment, iterate, and validate performance on a subset of what we plan to use at scale before investing too much, rather than making an expensive change down the road.

I think Google’s famous paper “Attention Is All You Need” is a good example of this. That team trained on eight GPUs [roughly one per attention head] to validate the effectiveness of encoder-decoder Transformers.

Eight GPUs is two HPC workstations… that’s all it takes to spark a GenAI revolution, and I believe experimenters and prototypers will continue this approach, saving cloud computing for mega jobs, foundation-model training, and production inference.

