87 episodes

Interviews with scientists and engineers working in Machine Learning and AI about their journeys, their insights, and the latest research topics.

Jay Shah Podcast, hosted by Jay Shah

    • Science
    • 5.0 • 14 Ratings


    Role of Large Language Models in AI-driven medical research | Dr. Imon Banerjee

    Dr. Imon Banerjee is an Associate Professor at the Mayo Clinic in Arizona, working at the intersection of AI and healthcare research. Her research focuses on multi-modality fusion and on mitigating bias in AI models, specifically in the context of medical applications, and more broadly on building predictive models from diverse data sources. Before joining the Mayo Clinic, she was an Assistant Professor at Emory University and a postdoctoral fellow at Stanford.

    Time stamps of the conversation
    00:00 Highlights
    01:00 Introduction
    01:50 Entry point in AI
    04:41 Landscape of AI in healthcare so far
    06:15 Research to practice
    07:50 Challenges of AI Democratization
    11:56 Era of Generative AI in Medical Research
    15:57 Responsibilities to realize
    16:40 Are LLMs a world model?
    17:50 Training on medical data
    19:55 AI as a tool in clinical workflows
    23:36 Scientific discovery in medicine
    27:08 Dangers of biased AI models in healthcare applications
    28:40 Good vs Bad bias
    33:33 Scaling models - the current trend in AI research
    35:05 Current focus of research
    36:41 Advice on getting started
    39:46 Interdisciplinary efforts for efficiency
    42:22 Personalities for getting into research

    More about Dr. Banerjee's lab and research: https://labs.engineering.asu.edu/banerjeelab/person/imon-banerjee/

    About the Host:
    Jay is a PhD student at Arizona State University.
    Linkedin: https://www.linkedin.com/in/shahjay22/
    Twitter: https://twitter.com/jaygshah22
    Homepage: https://www.public.asu.edu/~jgshah1/ (for any queries)

    Stay tuned for upcoming webinars!

    ***Disclaimer: The information contained in this video represents the views and opinions of the speaker and does not necessarily represent the views or opinions of any institution. It does not constitute an endorsement by any Institution or its affiliates of such video content.***

    • 46 min
    Algorithmic Reasoning, Graph Neural Nets, AGI and Tips to researchers | Petar Veličković

    Dr. Petar Veličković is a Staff Research Scientist at Google DeepMind and an Affiliated Lecturer at the University of Cambridge. He is known for his research contributions to graph representation learning, particularly graph neural networks and graph attention networks. At DeepMind, he has been working on Neural Algorithmic Reasoning, which we discuss at length in this podcast. Petar's research has been featured in numerous media articles and has been impactful in many ways, including improved predictions in Google Maps.

    Time stamps
    00:00:00 Highlights
    00:01:00 Introduction
    00:01:50 Entry point in AI
    00:03:44 Idea of Graph Attention Networks
    00:06:50 Towards AGI
    00:09:58 Attention in Deep learning
    00:13:15 Attention vs Convolutions
    00:20:20 Neural Algorithmic Reasoning (NAR)
    00:25:40 End-to-end learning vs NAR
    00:30:40 Improving Google Map predictions
    00:34:08 Interpretability
    00:41:28 Working at Google DeepMind
    00:47:25 Fundamental vs Applied side of research
    00:50:58 Industry vs Academia in AI Research
    00:54:25 Tips to young researchers
    01:05:55 Is a PhD required for AI research?

    More about Petar: https://petar-v.com/
    Graph Attention Networks: https://arxiv.org/abs/1710.10903
    Neural Algorithmic Reasoning: https://www.cell.com/patterns/pdf/S2666-3899(21)00099-4.pdf
    TacticAI paper: https://arxiv.org/abs/2310.10553
    And his collection of invited talks:  @petarvelickovic6033 


    • 1 hr 12 min
    Combining Vision & Language in AI perception and the era of LLMs & LMMs | Dr. Yezhou Yang

    Dr. Yezhou Yang is an Associate Professor at Arizona State University and director of the Active Perception Group at ASU. His research interests span cognitive robotics and computer vision, with a focus on understanding human actions from visual input and grounding them in natural language. Prior to joining ASU, he completed his Ph.D. at the University of Maryland and postdoctoral work at the Computer Vision Lab and the Perception and Robotics Lab.

    Timestamps of the conversation
    00:01:02 Introduction
    00:01:46 Interest in AI
    00:17:04 Entry in Robotics & AI Perception
    00:20:59 Combining Vision & language to Improve Robot Perception
    00:23:30 End-to-end learning vs traditional knowledge graphs
    00:28:28 What do LLMs learn?
    00:30:30 Nature of AI research
    00:36:00 Why vision & language in AI?
    00:45:40 Learning vs Reasoning in neural networks
    00:53:05 Bringing AI to the general crowd
    01:00:10 Transformers in Vision
    01:08:54 Democratization of AI
    01:13:42 Motivation for research: theory or application?
    01:18:50 Surpassing human intelligence
    01:25:13 Open challenges in computer vision research
    01:30:19 Doing research is a privilege
    01:35:00 Rejections, tips to read & write good papers
    01:43:37 Tips for AI Enthusiasts
    01:47:35 What is a good research problem?
    01:50:30 Dos and Don'ts in AI research

    More about Dr. Yang: https://yezhouyang.engineering.asu.edu/
    And his Twitter handle: https://twitter.com/Yezhou_Yang


    Check out Rora: https://teamrora.com/jayshah
    Guide to STEM PhD AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023


    • 1 hr 53 min
    Risks of AI in real-world and towards Building Robust Security measures | Hyrum Anderson

    Dr. Hyrum Anderson is a Distinguished Machine Learning Engineer at Robust Intelligence. Prior to that, he was Principal Architect of Trustworthy Machine Learning at Microsoft, where he founded Microsoft's AI Red Team; he has also led security research at MIT Lincoln Laboratory, Sandia National Laboratories, and Mandiant, and was Chief Scientist at Endgame (later acquired by Elastic). He is the co-author of the book “Not with a Bug, But with a Sticker,” and his research interests include assessing the security and privacy of ML systems and building robust AI models.


    Timestamps of the conversation
    00:50 Introduction
    01:40 Background in AI and ML security
    04:45 Attacks on ML systems
    08:20 Fractions of ML systems prone to Attacks
    10:38 Operational risks with security measures
    13:40 Solution from an algorithmic or policy perspective
    15:46 AI regulation and policy making
    22:40 Co-development of AI and security measures
    24:06 Risks of Generative AI and Mitigation
    27:45 Influencing an AI model
    30:08 Prompt stealing on ChatGPT
    33:50 Microsoft AI Red Team
    38:46 Managing risks
    39:41 Government Regulations
    43:04 What to expect from the Book
    46:40 Black in AI & Bountiful Children’s Foundation

    Check out Rora: https://teamrora.com/jayshah
    Guide to STEM Ph.D. AI Researcher + Research Scientist pay: https://www.teamrora.com/post/ai-researchers-salary-negotiation-report-2023
    Rora's negotiation philosophy:
    https://www.teamrora.com/post/the-biggest-misconception-about-negotiating-salary
    https://www.teamrora.com/post/job-offer-negotiation-lies

    Hyrum's Linkedin: https://www.linkedin.com/in/hyrumanderson/
    And Research: https://scholar.google.com/citations?user=pP6yo9EAAAAJ&hl=en
    Book - Not with a Bug, But with a Sticker: https://www.amazon.com/Not-Bug-But-Sticker-Learning/dp/1119883989/


    • 51 min
    Being aware of Systematic Biases and Over-trust in AI | Meredith Broussard

    Meredith Broussard is an Associate Professor at New York University and research director at the NYU Alliance for Public Interest Technology. Her research interests include using data analysis for social good and ethical AI. She is also the author of the book “More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech,” which we discuss with her in this podcast.

    Time stamps of the conversation
    00:42 Introduction
    01:17 Background
    02:17 Meaning of “it is not a glitch” in the book title
    04:40 How are biases coded into AI systems?
    08:45 AI is not the solution to every problem
    09:55 Algorithm Auditing
    11:57 Why don't organizations use algorithmic auditing more often?
    15:12 Techno-chauvinism and drawing boundaries
    23:18 Bias issues with ChatGPT and Auditing the model
    27:55 Using AI for Public Good - AI on context
    31:52 Advice to young researchers in AI

    Meredith's homepage: https://meredithbroussard.com/
    And her Book: https://mitpress.mit.edu/9780262047654/more-than-a-glitch/


    • 37 min
    P2 Working at DeepMind, Interview Tips & doing a PhD for a career in AI | Dr. David Stutz

    Part-2 of my podcast with David Stutz. (Part-1: https://youtu.be/J7hzMYUcfto)
    David is a research scientist at DeepMind working on building robust and safe deep learning models. Prior to joining DeepMind, he was a PhD student at the Max Planck Institute for Informatics. He also maintains a fantastic blog on various topics in machine learning and graduate life that is insightful for young researchers.

    00:00:00 Working at DeepMind
    00:08:20 Importance of Abstraction and Collaboration in Research
    00:13:08 DeepMind internship project
    00:19:39 What drives research projects at DeepMind
    00:27:45 Research in Industry vs Academia
    00:30:45 Interview tips for research roles, at DeepMind or other companies
    00:44:38 Finding the right Advisor & Institute for PhD
    01:02:12 Do you really need a Ph.D. to do AI/ML research?
    01:08:28 Academia vs Industry: Making the choice
    01:10:49 Pressure to publish more papers
    01:21:35 Artificial General Intelligence (AGI)
    01:33:24 Advice to young enthusiasts on getting started

    David's Homepage: https://davidstutz.de/
    And his blog: https://davidstutz.de/category/blog/
    Research work: https://scholar.google.com/citations?user=TxEy3cwAAAAJ&hl=en


    • 1 hr 42 min

Customer Reviews

5.0 out of 5
14 Ratings

Ekado ,

Practical real world applications

As a ds, i really appreciate the conversations you have with your guests. They help me think through problems i’m working on and understand areas of ds i’m unfamiliar with. Really love the work you’re putting out into the world! Thank you!

jangadi ,

AI podcast

One the best podcast series to get started with Machine Learning and AI for beginners. Simply explained and would highly recommend to students and researchers.

Priyanka Komala ,

Passionate ML host

Jay Shah is an energetic Machine Learning expert who brings the best guests on his show to unveil nuances of ML. His engaging conversation style makes this podcast a must hear on your playlist.
