Class meeting #12 – Music recommendation algorithms II: Collaborative filtering – Monday 11/26



The principal assumption behind collaborative filtering for music recommendation is that your listening choices act as an implicit signal not just about your own preferences, but also about the preferences of “listeners like you”. In a short response based on your reading of the two papers (and skimming the slides prepared by an ex-Spotify employee), what are some of the challenges faced by the designers of recommendation systems that use collaborative filtering? If possible, suggest ways that this technique can be improved or used in tandem with other approaches to recommendation in order to overcome these problems.

6 thoughts on “Class meeting #12 – Music recommendation algorithms II: Collaborative filtering – Monday 11/26”

  1. clj2142

    Levy and Bosteels show how collaborative filtering will often recommend popular songs considerably more than more obscure “long tail” songs. This reinforces the current popularity rankings and can inhibit discovery of newer artists. Bell, Koren, and Volinsky note the “cold start” problem: collaborative filtering is less effective when working with less input data. A new user, or a user who rarely gives feedback on the songs they hear, will often receive song suggestions less in line with their musical tastes than users who’ve provided more data. One possible solution to the cold start problem would be to use personal data (such as the data Spotify sells to advertisers) to create a profile of the user and recommend songs that fit the tastes of similar profiles, though this method would probably also be less effective with new users.
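    The cold start behavior described above can be sketched in a few lines. This is a toy user-based collaborative filter with made-up play counts (the data, song names, and function names are all hypothetical, not taken from the readings): a listener with no history has zero similarity to everyone, so the system has nothing to recommend.

```python
# Minimal sketch of user-based collaborative filtering and its cold start
# problem. All listeners, songs, and play counts here are invented.
import math

def cosine(u, v):
    """Cosine similarity between two play-count dicts keyed by song."""
    common = set(u) & set(v)
    dot = sum(u[s] * v[s] for s in common)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    if norm_u == 0 or norm_v == 0:
        return 0.0  # a brand-new user with no plays matches nobody
    return dot / (norm_u * norm_v)

def recommend(target, others, k=2):
    """Score unheard songs by similarity-weighted plays of other users."""
    scores = {}
    for other in others:
        sim = cosine(target, other)
        if sim <= 0:
            continue  # ignore users with no overlap
        for song, plays in other.items():
            if song not in target:
                scores[song] = scores.get(song, 0.0) + sim * plays
    return sorted(scores, key=scores.get, reverse=True)[:k]

listeners = [
    {"song_a": 5, "song_b": 3},
    {"song_a": 4, "song_c": 2},
]
print(recommend({"song_a": 3}, listeners))  # picks songs from similar listeners
print(recommend({}, listeners))             # cold start: returns []
```

    The empty result for the new user is exactly the gap that a profile-based or genre-survey fallback would need to fill.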

  2. erc2175

    Just to start, I understood practically nothing of the reading because I am not accustomed to reading things where the central conceptual ideas are basically mathematical. So, I am looking forward to discussing this in class using words that normal people use.

    Anyway, it seems like the basic objective of recommender systems is to have the systems influence user listening patterns, which would show that they are actually working. Different authors had different objectives. The Spotify presentation gave the most general outline of possible objectives, but it emphasized things like scalability (weak-ish, and required combination with other models, SO I THINK…) and the amount of space the model takes up. So the challenges to overcome center on these things. Of course I have no solutions for them, so continuing on… Bell et al. mention another problem with collaborative filtering, the “cold start” issue, and it seems like Spotify has already overcome this issue by having a decision tree decide which recommender to use; it probably uses a non-collaborative-filtering-based one for new users, which is a smart idea. And Mark Levy et al. focus on “the long tail,” a collection of many less-popular “products” (or whatever). According to them, collaborative filtering is pretty good at reaching this long tail, so good for them.

    What should they do with collaborative filtering? I don’t know, just don’t use it all the time, I guess. Looking forward to class tomorrow when I can hopefully make some sense of all this.

  3. lnl2110

    Levy/Bosteels write that the problem with collaborative filtering is that popular artists get recommended disproportionately. Bell/Koren/Volinsky write that collaborative filtering suffers from the “cold start” problem, which means that the system cannot address new users and new products. The collaborative filtering methods this paper emphasizes are mostly supplemented by direct feedback, such as ratings, that the user provides. The slides Bernhardsson presented also suggest incorporating direct feedback, such as skips and thumbs-ups. Bell/Koren/Volinsky call this “behavioral input” and “implicit feedback.” These types of feedback can also be compared between users whose listening overlaps with that of other users.

  4. yh2825

    There are two aspects of collaborative filtering that point to weaknesses of the approach. The first is that when users do not have enough input data in the beginning (i.e. the user is new to the service), collaborative filtering finds it hard to make recommendations. One way to improve this is to give new users some genre options at the start (like EDM, hip hop, or country) and then recommend artists within that genre given the statistical probabilities of popularity. However, this could also be problematic, and it relates to the second weakness of collaborative filtering – if popular artists get recommended more often and users enjoy the recommendations (which is likely, since popular artists probably produce higher-quality music), how do less popular artists get “attention” from collaborative filtering and emerge in the recommendation list? This is the popularity bias mentioned in the long tail article. The long tail theory would suggest that 80% of song plays come from the most popular 20% of artists, while the remaining 20% of plays come from the other 80% of artists. It is likely that, through collaborative filtering, recommendations would largely stay within the songs that are played the most. I think this is also the reason that many music streaming services provide their users with an explore section, thereby giving “long tail” artists more weight. In conclusion, it is very hard to use a single statistical model to generalize music preference, and collaborative filtering should be combined with other methods in order to improve the quality of recommendations.
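    The popularity bias above can be illustrated with a toy catalog (the numbers are purely illustrative, not from the readings): if artist play counts follow a roughly Zipf-shaped curve and a naive recommender allocates exposure in proportion to past plays, the head of the distribution keeps most of the exposure and the tail never catches up.

```python
# Toy illustration of the long tail / popularity bias. A hypothetical
# catalog of 100 artists with Zipf-like play counts: artist at rank r
# has roughly 1/(r+1) of the top artist's plays. All numbers invented.
plays = [1000 // (rank + 1) for rank in range(100)]
total = sum(plays)

# Share of all plays captured by the top 20% of artists.
head = sum(plays[:20])
print(f"top 20% of artists hold {100 * head / total:.0f}% of plays")

# A popularity-proportional recommender allocates exposure the same way,
# so a deep-tail artist gets a vanishing share of recommendations.
exposure = [p / total for p in plays]
print(f"tail artist #100 gets {100 * exposure[99]:.2f}% of recommendations")
```

    Under these made-up numbers the top fifth of artists holds well over half of all plays, which is the self-reinforcing loop the comment describes: played more, therefore recommended more, therefore played more.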

  5. spn2120

    Collaborative filtering relies on a user’s past listening/viewing information to choose new products that the user might enjoy. However, as the Netflix article points out, collaborative filtering suffers from the “cold start” problem, in which the recommender system fails to produce outputs without prior information. Furthermore, only users who rate products similarly may feed into the recommender’s output. The Spotify slides show that combining many models helps provide a wider view of the situation, given that every algorithm may be affected by biases, making it harder to optimize for a specific variable. Collaborative filtering may be used in tandem with methods of gathering implicit feedback, such as a user’s scrolling or mouse placement, or even offline testing involving surveying what type of content the user enjoys. While typical surveys mostly ask about genre, there could be an algorithm that works in a similar way to a conversation partner, asking users what they enjoy feeling, or their preferred emotional response, when engaging with a movie or music. Certain keywords in their responses could be compared with other users’ responses and product histories to give good recommendations. Essentially, the idea here is that a user’s listening experience is not determined by genre or content specifically, but by what they desire to feel when engaging with the material. Users may watch Saving Private Ryan for different reasons, which is why the filtering may be inaccurate.

  6. ijg2112

    Recommendation systems are difficult to design because of all of the different input factors that must be accounted for. Collaborative filtering compares a person’s activity to that of other users to decide what is an appropriate recommendation, but a problem arises if a user is new and has little to no data for comparison. A possible solution is to import the user’s previous music libraries to start forming their profile. Another possible issue is lack of diversity in recommendations. This can occur during collaborative filtering because a majority of similar users will likely have popular artists in common, so the recommendation system will suggest those popular artists because they seem to have a high success rate within that group of users. However, if the system makes recommendations like this, smaller artists will likely get recommended infrequently. The algorithm that Levy and Bosteels tested could be a possible solution, but with some changes. For example, the algorithm could select the most appropriate artist for the user from each popularity rank, so that the user gets unique recommendations from artists of a variety of sizes.
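    The "one pick per popularity rank" idea at the end of that comment can be sketched as follows. This is only an illustration of the suggestion, not the algorithm from the paper; the artist names, play counts, and affinity scores are all hypothetical.

```python
# Hedged sketch of tiered recommendation: bucket artists by popularity
# rank, then pick the user's best match from each bucket, so every list
# mixes big and small artists. All catalog data below is invented.

def recommend_across_tiers(artists, n_tiers=3):
    """artists: list of (name, play_count, user_affinity) tuples.
    Returns one artist name per popularity tier, chosen by affinity."""
    ranked = sorted(artists, key=lambda a: a[1], reverse=True)
    size = max(1, len(ranked) // n_tiers)
    picks = []
    for i in range(0, len(ranked), size):
        tier = ranked[i:i + size]               # one popularity bucket
        best = max(tier, key=lambda a: a[2])    # best fit within the bucket
        picks.append(best[0])
    return picks[:n_tiers]

catalog = [
    ("mega_star", 9_000_000, 0.40),
    ("big_name",  5_000_000, 0.70),
    ("mid_band",    200_000, 0.65),
    ("indie_act",    90_000, 0.30),
    ("local_duo",     4_000, 0.90),
    ("demo_tape",     1_200, 0.20),
]
print(recommend_across_tiers(catalog))
# → ['big_name', 'mid_band', 'local_duo']
```

    A plain popularity-weighted recommender would likely surface only the head of this catalog; forcing one pick per tier guarantees the long tail a slot in every list, at the cost of sometimes recommending a lower-affinity artist.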
