22 episodes

Recommender Systems are the most challenging, powerful and ubiquitous area of machine learning and artificial intelligence. This podcast hosts the experts in recommender systems research and application. From understanding what users really want to driving large-scale content discovery - from delivering personalized online experiences to catering to multi-stakeholder goals. Guests from industry and academia share how they tackle these and many more challenges. With Recsperts coming from universities all around the globe or from various industries like streaming, ecommerce, news, or social media, this podcast provides depth and insights. We go far beyond your 101 on RecSys and the shallowness of another matrix factorization based rating prediction blogpost! The motto is: be relevant or become irrelevant!
Expect a brand-new interview each month and follow Recsperts on your favorite podcast player.

Recsperts - Recommender Systems Experts Marcel Kurovski

    • Technology
    • 5.0 • 1 Rating


    #21: User-Centric Evaluation and Interactive Recommender Systems with Martijn Willemsen

    In episode 21 of Recsperts, we welcome Martijn Willemsen, Associate Professor at the Jheronimus Academy of Data Science and Eindhoven University of Technology. Martijn researches interactive recommender systems, including aspects of decision psychology and user-centric evaluation. We discuss how users gain control over recommendations, how to support their goals and needs, and how the user-centric evaluation framework fits into all of this.
    In our interview, Martijn outlines the reasons for giving users control over recommendations and how to holistically evaluate the satisfaction and usefulness of recommendations for users' goals and needs. We discuss the psychology of decision making and how well recommender systems do or do not support it. We also dive into music recommender systems, discussing how nudging users to explore new genres can work and how longitudinal studies can advance insights in recommender systems research.
    Towards the end of the episode, Martijn and I also discuss some examples and the usefulness of enabling users to provide negative explicit feedback to the system.
    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

    (00:00) - Introduction
    (03:03) - About Martijn Willemsen
    (15:14) - Waves of User-Centric Evaluation in RecSys
    (19:35) - Behaviorism is not Enough
    (46:21) - User-Centric Evaluation Framework
    (01:05:38) - Genre Exploration and Longitudinal Studies in Music RecSys
    (01:20:59) - User Control and Negative Explicit Feedback
    (01:31:50) - Closing Remarks
    Links from the Episode:
    Martijn Willemsen on LinkedIn
    Martijn Willemsen's Website
    User-centric Evaluation Framework
    Behaviorism is not Enough (Talk at RecSys 2016)
    Neil Hunt: Quantifying the Value of Better Recommendations (Keynote at RecSys 2014)
    What recommender systems can learn from decision psychology about preference elicitation and behavioral change (Talk at Boise State (Idaho) and GroupLens at University of Minnesota)
    Eric J. Johnson: The Elements of Choice
    Rasch Model
    Spotify Web API
    Papers:
    Ekstrand et al. (2016): Behaviorism is not Enough: Better Recommendations Through Listening to Users
    Knijnenburg et al. (2012): Explaining the user experience of recommender systems
    Ekstrand et al. (2014): User perception of differences in recommender algorithms
    Liang et al. (2022): Exploring the longitudinal effects of nudging on users’ music genre exploration behavior and listening preferences
    McNee et al. (2006): Being accurate is not enough: how accuracy metrics have hurt recommender systems
    General Links:
    Follow me on LinkedIn
    Follow me on X
    Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
    Recsperts Website

    • 1 hr 35 min
    #20: Practical Bandits and Travel Recommendations with Bram van den Akker

    In episode 20 of Recsperts, we welcome Bram van den Akker, Senior Machine Learning Scientist at Booking.com. Bram's work focuses on bandit algorithms and counterfactual learning, and he was one of the creators of the Practical Bandits tutorial at the World Wide Web conference. We talk about the role of bandit feedback in decision-making systems, specifically for recommendations in the travel industry.
    In our interview, Bram elaborates on bandit feedback and how it is used in practice. We discuss off-policy and on-policy bandits, and we learn that counterfactual evaluation is well suited for selecting the best model candidates for downstream A/B testing, but is not a replacement for it. We hear more about the practical challenges of bandit feedback, for example the difference between model scores and propensities, the role of stochasticity, and the nitty-gritty details of reward signals. Bram also shares the challenges of recommendations in the travel domain, where he points out the sparsity of signals and the delay of feedback.
    At the end of the episode, we both agree on a good example of a clickbait-heavy news service on our phones.
    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.
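    The distinction Bram draws between model scores and propensities is at the heart of counterfactual evaluation. A minimal sketch of an inverse propensity scoring (IPS) estimator, with weight clipping to tame variance, could look as follows (the function name and toy data are illustrative, not from the episode):

```python
import numpy as np

def ips_estimate(rewards, logged_propensities, target_propensities, clip=10.0):
    """Estimate a target policy's value from logged bandit feedback.

    Each logged reward is weighted by how much more (or less) likely the
    target policy was to take the logged action than the logging policy.
    Clipping the importance weights trades a little bias for lower variance.
    """
    weights = np.minimum(target_propensities / logged_propensities, clip)
    return float(np.mean(weights * rewards))

# Toy log: four interactions with click (1.0) / no-click (0.0) rewards.
rewards = np.array([1.0, 0.0, 1.0, 0.0])
logged = np.array([0.5, 0.25, 0.25, 0.5])   # propensities of the logging policy
target = np.array([0.8, 0.1, 0.6, 0.2])     # propensities of a candidate policy
print(ips_estimate(rewards, logged, target))
```

    As discussed in the episode, an estimate like this helps rank candidate policies offline; the final verdict still belongs to an A/B test.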

    (00:00) - Introduction
    (02:58) - About Bram van den Akker
    (09:16) - Motivation for Practical Bandits Tutorial
    (16:53) - Specifics and Challenges of Travel Recommendations
    (26:19) - Role of Bandit Feedback in Practice
    (49:13) - Motivation for Bandit Feedback
    (01:00:54) - Practical Start for Counterfactual Evaluation
    (01:06:33) - Role of Business Rules
    (01:17:48) - Rewards and More
    (01:32:45) - Closing Remarks
    Links from the Episode:
    Bram van den Akker on LinkedIn
    Practical Bandits: An Industry Perspective (Website)
    Practical Bandits: An Industry Perspective (Recording)
    Tutorial at The Web Conference 2020: Unbiased Learning to Rank: Counterfactual and Online Approaches
    Tutorial at RecSys 2021: Counterfactual Learning and Evaluation for Recommender Systems: Foundations, Implementations, and Recent Advances
    GitHub: Open Bandit Pipeline
    Papers:
    van den Akker et al. (2023): Practical Bandits: An Industry Perspective
    van den Akker et al. (2022): Extending Open Bandit Pipeline to Simulate Industry Challenges
    van den Akker et al. (2019): ViTOR: Learning to Rank Webpages Based on Visual Features
    General Links:
    Follow me on LinkedIn
    Follow me on X
    Send me your comments, questions and suggestions to marcel.kurovski@gmail.com
    Recsperts Website

    • 1 hr 45 min
    #19: Popularity Bias in Recommender Systems with Himan Abdollahpouri

    In episode 19 of Recsperts, we welcome Himan Abdollahpouri, an Applied Research Scientist for Personalization & Machine Learning at Spotify. We discuss the role of popularity bias in recommender systems, which was the topic of Himan's dissertation. We talk about multi-objective and multi-stakeholder recommender systems as well as the challenges of music and podcast streaming personalization at Spotify.
    In our interview, Himan walks us through popularity bias as a main cause of unfair recommendations for multiple stakeholders. We discuss the consumer- and provider-side implications and how to evaluate popularity bias. The major problem is not the sheer existence of popularity bias, but its propagation through various collaborative filtering algorithms. We also learn how to counteract it by debiasing the data, the model itself, or its output. Finally, we hear more about the relationship between multi-objective and multi-stakeholder recommender systems.
    At the end of the episode, Himan also shares the influence of popularity bias in music and podcast streaming at Spotify as well as how calibration helps to better cater content to users' preferences.
    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.
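    One evaluation idea discussed in the episode is comparing how popular a user's recommendations are relative to what the user actually consumed. A rough, simplified sketch of such a popularity-lift style measure (names and toy data are mine, not Himan's code):

```python
from collections import Counter

def popularity_lift(user_history, recommendations, interactions):
    """Relative increase in average item popularity from a user's own
    history to their recommendation list. Values well above 0 suggest
    the recommender amplifies already-popular items for this user."""
    pop = Counter(interactions)  # item -> interaction count across all users
    avg_pop = lambda items: sum(pop[i] for i in items) / len(items)
    profile_pop = avg_pop(user_history)
    recs_pop = avg_pop(recommendations)
    return (recs_pop - profile_pop) / profile_pop

# Toy data: item "a" is very popular, "c" is niche.
interactions = ["a"] * 8 + ["b"] * 3 + ["c"]
print(popularity_lift(["b", "c"], ["a", "b"], interactions))
```

    A niche-leaning user who receives mostly blockbuster items gets a high lift, which is exactly the miscalibration the episode warns about.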

    (00:00) - Introduction
    (04:43) - About Himan Abdollahpouri
    (15:23) - What is Popularity Bias and why is it important?
    (25:05) - Effect of Popularity Bias in Collaborative Filtering
    (30:30) - Individual Sensitivity towards Popularity
    (36:25) - Introduction to Bias Mitigation
    (53:16) - Content for Bias Mitigation
    (56:53) - Evaluating Popularity Bias
    (01:05:01) - Popularity Bias in Music and Podcast Streaming
    (01:08:04) - Multi-Objective Recommender Systems
    (01:16:13) - Multi-Stakeholder Recommender Systems
    (01:18:38) - Recommendation Challenges at Spotify
    (01:35:16) - Closing Remarks
    Links from the Episode:
    Himan Abdollahpouri on LinkedIn
    Himan Abdollahpouri on X
    Himan's Website
    Himan's PhD Thesis on "Popularity Bias in Recommendation: A Multi-stakeholder Perspective"
    2nd Workshop on Multi-Objective Recommender Systems (MORS @ RecSys 2022)
    Papers:
    Su et al. (2009): A Survey on Collaborative Filtering Techniques
    Mehrotra et al. (2018): Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommender Systems
    Abdollahpouri et al. (2021): User-centered Evaluation of Popularity Bias in Recommender Systems
    Abdollahpouri et al. (2019): The Unfairness of Popularity Bias in Recommendation
    Abdollahpouri et al. (2017): Controlling Popularity Bias in Learning-to-Rank Recommendation
    Wasilewski et al. (2016): Incorporating Diversity in a Learning to Rank Recommender System
    Oh et al. (2011): Novel Recommendation Based on Personal Popularity Tendency
    Steck (2018): Calibrated Recommendations
    Abdollahpouri et al. (2023): Calibrated Recommendations as a Minimum-Cost Flow Problem
    Seymen et al. (2022): Making smart recommendations for perishable and stockout products
    General Links:
    Follow me on LinkedIn
    Follow me on X
    Send me your comments, questions and suggestions to marcel@recsperts.com
    Recsperts Website

    • 1 hr 41 min
    #18: Recommender Systems for Children and non-traditional Populations

    In episode 18 of Recsperts, we hear from Professor Sole Pera from Delft University of Technology. We discuss the use of recommender systems for non-traditional populations, with children in particular. Sole shares the specifics, surprises, and subtleties of her research on recommendations for children.
    In our interview, Sole and I discuss use cases and domains that need particular attention with respect to non-traditional populations. Sole outlines some of the major challenges, such as the lack of public datasets and the multifaceted criteria for the suitability of recommendations. The highly dynamic needs and abilities of children make proper user modeling a crucial part of the design and development of recommender systems. We also touch on how children interact differently with recommender systems and learn that trust plays a major role here.
    Towards the end of the episode, we revisit the different goals and stakeholders involved in recommendations for children, especially the role of parents. We close with an overview of the current research community.
    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

    (00:00) - Introduction
    (04:56) - About Sole Pera
    (06:37) - Non-traditional Populations
    (09:13) - Dedicated User Modeling
    (25:01) - Main Application Domains
    (40:16) - Lack of Data about non-traditional Populations
    (47:53) - Data for Learning User Profiles
    (57:09) - Interaction between Children and Recommendations
    (01:00:26) - Goals and Stakeholders
    (01:11:35) - Role of Parents and Trust
    (01:17:59) - Evaluation
    (01:26:59) - Research Community
    (01:32:37) - Closing Remarks
    Links from the Episode:
    Sole Pera on LinkedIn
    Sole's Website
    Children and Recommenders
    KidRec 2022
    People and Information Retrieval Team (PIReT)
    Papers:
    Beyhan et al. (2023): Covering Covers: Characterization Of Visual Elements Regarding Sleeves
    Murgia et al. (2019): The Seven Layers of Complexity of Recommender Systems for Children in Educational Contexts
    Pera et al. (2019): With a Little Help from My Friends: Use of Recommendations at School
    Charisi et al. (2022): Artificial Intelligence and the Rights of the Child: Towards an Integrated Agenda for Research and Policy
    Gómez et al. (2021): Evaluating recommender systems with and for children: towards a multi-perspective framework
    Ng et al. (2018): Recommending social-interactive games for adults with autism spectrum disorders (ASD)
    General Links:
    Follow me on LinkedIn
    Follow me on Twitter
    Send me your comments, questions and suggestions to marcel@recsperts.com
    Recsperts Website

    • 1 hr 39 min
    #17: Microsoft Recommenders and LLM-based RecSys with Miguel Fierro

    In episode 17 of Recsperts, we meet Miguel Fierro, who is a Principal Data Science Manager at Microsoft and holds a PhD in robotics. We talk about the Microsoft recommenders repository with over 15k stars on GitHub and discuss the impact of LLMs on RecSys. Miguel also shares his view of the T-shaped data scientist.
    In our interview, Miguel shares how he transitioned from robotics into personalization and how the Microsoft recommenders repository started. We learn more about its three key components: examples, library, and tests. With more than 900 tests and more than 30 different algorithms, this library demonstrates a huge effort of open-source contribution and maintenance. We hear more about the principles that made this effort possible and successful, and Miguel explains the reasoning behind evidence-based design, which puts the users of microsoft-recommenders and their expectations first. We also discuss the impact that recent LLM-related innovations have on RecSys.
    At the end of the episode, Miguel explains the T-shaped data professional as advice for staying competitive and building a champion data team. We conclude with some remarks on the adoption and ethical challenges that recommender systems pose and that need further attention.
    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts. Don't forget to follow the podcast and please leave a review.

    (00:00) - Episode Overview
    (03:34) - Introduction Miguel Fierro
    (16:19) - Microsoft Recommenders Repository
    (30:04) - Structure of MS Recommenders
    (34:16) - Contributors to MS Recommenders
    (37:10) - Scalability of MS Recommenders
    (39:32) - Impact of LLMs on RecSys
    (48:26) - T-shaped Data Professionals
    (53:29) - Further RecSys Challenges
    (59:28) - Closing Remarks
    Links from the Episode:
    Miguel Fierro on LinkedIn
    Miguel Fierro on Twitter
    Miguel's Website
    Microsoft Recommenders
    McKinsey (2013): How retailers can keep up with consumers
    Fortune (2012): Amazon's recommendation secret
    RecSys 2021 Keynote by Max Welling: Graph Neural Networks for Knowledge Representation and Recommendation
    Papers:
    Geng et al. (2022): Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)
    General Links:
    Follow me on LinkedIn
    Follow me on Twitter
    Send me your comments, questions and suggestions to marcel@recsperts.com
    Recsperts Website

    • 1 hr 2 min
    #16: Fairness in Recommender Systems with Michael D. Ekstrand

    In episode 16 of Recsperts, we hear from Michael D. Ekstrand, Associate Professor at Boise State University, about fairness in recommender systems. We discuss why fairness matters and provide an overview of the multidimensional fairness-aware RecSys landscape. Furthermore, we talk about tradeoffs, methods and receive practical advice on how to get started with tackling unfairness.
    In our discussion, Michael outlines the differences and similarities between fairness and bias. We discuss the several stages at which biases can enter the system, as well as how bias can indeed support mitigating unfairness. We cover the perspectives of different stakeholders with respect to fairness and learn that measuring fairness depends on the specific fairness concern one is interested in, and that solving fairness universally is highly unlikely.
    Towards the end of the episode, we take a look at further challenges as well as how and where the upcoming RecSys 2023 provides a forum for those interested in fairness-aware recommender systems.
    Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.
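    One concrete instance of a fairness measurement in this space is how ranking positions distribute exposure across provider groups. A simplified, illustrative sketch in the spirit of expected-exposure metrics (names and data are hypothetical, not Michael's own code):

```python
import math

def group_exposure(ranking, item_group):
    """Sum a logarithmic position discount per provider group.

    Higher positions receive more exposure; comparing the per-group totals
    against a target (e.g. proportional to relevance) reveals exposure gaps.
    """
    exposure = {}
    for pos, item in enumerate(ranking, start=1):
        group = item_group[item]
        exposure[group] = exposure.get(group, 0.0) + 1.0 / math.log2(pos + 1)
    return exposure

# Two items from group "A" ranked above two items from group "B".
item_group = {"i1": "A", "i2": "A", "i3": "B", "i4": "B"}
print(group_exposure(["i1", "i2", "i3", "i4"], item_group))
```

    Even this toy example shows group "A" receiving noticeably more exposure than "B" purely due to rank position, which is one of the gaps fairness-aware ranking methods try to close.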

    (00:00) - Episode Overview
    (02:57) - Introduction Michael Ekstrand
    (17:08) - Motivation for Fairness-Aware Recommender Systems
    (25:45) - Overview and Definition of Fairness in RecSys
    (46:51) - Distributional and Representational Harm
    (53:59) - Relationship between Fairness and Bias
    (01:04:43) - Tradeoffs
    (01:13:36) - Methods and Metrics for Fairness
    (01:28:06) - Practical Advice for Tackling Unfairness
    (01:32:24) - Further Challenges
    (01:35:24) - RecSys 2023
    (01:38:29) - Closing Remarks
    Links from the Episode:
    Michael Ekstrand on LinkedIn
    Michael Ekstrand on Mastodon
    Michael's Website
    GroupLens Lab at University of Minnesota
    People and Information Research Team (PIReT)
    6th FAccTRec Workshop: Responsible Recommendation
    NORMalize: The First Workshop on Normative Design and Evaluation of Recommender Systems
    ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
    Coursera: Recommender Systems Specialization
    LensKit: Python Tools for Recommender Systems
    Chris Anderson - The Long Tail: Why the Future of Business Is Selling Less of More
    Fairness in Recommender Systems (in Recommender Systems Handbook)
    Ekstrand et al. (2022): Fairness in Information Access Systems
    Keynote at EvalRS (CIKM 2022): Do You Want To Hunt A Kraken? Mapping and Expanding Recommendation Fairness
    Friedler et al. (2021): The (Im)possibility of Fairness: Different Value Systems Require Different Mechanisms For Fair Decision Making
    Safiya Umoja Noble (2018): Algorithms of Oppression: How Search Engines Reinforce Racism
    Papers:
    Ekstrand et al. (2018): Exploring author gender in book rating and recommendation
    Ekstrand et al. (2014): User perception of differences in recommender algorithms
    Selbst et al. (2019): Fairness and Abstraction in Sociotechnical Systems
    Pinney et al. (2023): Much Ado About Gender: Current Practices and Future Recommendations for Appropriate Gender-Aware Information Access
    Diaz et al. (2020): Evaluating Stochastic Rankings with Expected Exposure
    Raj et al. (2022): Fire Dragon and Unicorn Princess; Gender Stereotypes and Children's Products in Search Engine Responses
    Mitchell et al. (2021): Algorithmic Fairness: Choices, Assumptions, and Definitions
    Mehrotra et al. (2018): Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommender Systems
    Raj et al. (2022): Measuring Fairness in Ranked Results: An Analytical and Empirical Comparison
    Beutel et al. (2019): Fairness in Recommendation Ranking through Pairwise Comparisons
    Beutel et al. (2017): Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations
    Dwork et al. (2018): Fairness Under Composition
    Bower et al. (2022): Random Isn't Always Fair: Candidate Set Imbalance and Exposure Inequality in Recommender Systems
    Zehlike et al. (2022): Fairness in Ranking: A Survey
    Hoffmann (2019): Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse

    • 1 hr 42 min

Customer Reviews

5.0 out of 5
1 Rating


You Might Also Like

Gradient Dissent: Conversations on AI
Lukas Biewald
Super Data Science: ML & AI Podcast with Jon Krohn
Jon Krohn
The Lowe Post
ESPN, Zach Lowe
The Bill Simmons Podcast
The Ringer
Americast
BBC Radio