AI-podden Ather Gattami
-
- Technology
One of Sweden’s most popular technology podcasts that sheds light on current AI developments with guests from all sides of society.
-
AI in action: A real-world IT approach
In this episode, Ather interviews Ramprakash Ramamoorthy, AI director at ManageEngine, who frames AI as a practical tool for pattern recognition and productivity in IT management, rather than a threat to human jobs. Ram discusses how ManageEngine has integrated AI across its product suite since 2011 to optimize IT operations and customer service, enabling proactive management decisions. He describes the evolution of AI from a hyped technology into a crucial component for streamlining business processes and enhancing decision-making in the IT sector.
-
AI-podden News - March
This week our guest host, Sifted's Mimi Billing, and Ather discuss March's AI developments. Elon Musk has sued OpenAI, alleging that its Microsoft collaboration and departure from open-source values violate the original nonprofit agreement; concurrently, Musk released his own open-source language model, Grok. Meanwhile, Anthropic's new model, Claude 3, reportedly surpasses ChatGPT and Google's Gemini in performance. The episode also covers the EU AI Act and a UN AI resolution, drawing varied perspectives on their impact. Finally, the hosts discuss the resignation of Stability AI's CEO and the company's financial troubles following its $101 million funding round in October 2022, underlining the difficulties of AI monetization in a saturated startup landscape.
-
AI-podden News - February
This week our guest host, Sifted's Mimi Billing, and Ather discuss February's AI developments, including OpenAI's new video tool, Sora, and its cautious release strategy amid ethical concerns. They explore the bias issues in Google's Gemini AI and contrast different AI philosophies, highlighting the debate between practical AI applications and the pursuit of genuine intelligence. The conversation also covers NVIDIA's significant market valuation, the concept of an AI bubble, and Google's approach to AI-written content for publishers. Investments in robotics by NVIDIA and OpenAI are mentioned as forward-looking strategies. Finally, the hosts compare the hype around AI with past excitement around the crypto industry.
Don't forget to subscribe and follow us on LinkedIn.
-
Democratisation of Generative AI
Salman Avestimehr, USC professor and FedML co-founder, discusses AI, federated learning, and large language models. This week's topics include how to define AI, research on distributed and federated systems, current challenges in generative AI, and trustworthiness in machine learning. Ather and Salman explore a future of accessible LLMs on personal devices, stress the importance of data, and discuss challenges such as distinguishing machine-generated content. The conversation touches on enforcing regulation in the era of open-source tools and ends with a recommendation of the book "Life 3.0".
Don't forget to subscribe and follow us on LinkedIn.
-
Financial trading as a game
In this episode of the AI Podcast, host Ather Gattami interviews Michal Sustr, co-founder of EquiLibre, a company that uses game theory and reinforcement learning for trading.
Michal discusses the company's approach to trading, which is based on the idea that markets can be modelled as games. He explains that EquiLibre uses game theory to develop algorithms that can predict the behaviour of other traders and make profitable trades.
Ather and Michal also discuss the future of AI; Michal predicts that large language models (LLMs) will remain a major area of research in the coming years and believes they will eventually be able to reason and solve problems in a way that is indistinguishable from human intelligence.
Don't forget to subscribe and follow us on LinkedIn.
-
Generalisation vs Reasoning
Reinforcement learning is a type of machine learning in which agents learn by interacting with their environment. DeepMind researcher David Abel is interested in using reinforcement learning to understand and build intelligent agents. Two of the big open questions in AI are what exactly "intelligence" is, and how to build intelligent agents that can reason and solve problems.
Abel's research focuses on AI's core scientific questions, emphasizing the need for conceptual clarity about intelligent behaviour and agents. He expresses skepticism about aligning AI through a single reward function, suggesting that multi-criteria objectives could better align AI with human interests.
Don't forget to subscribe and follow us on LinkedIn.