This post follows this tutorial: http://www.albertauyeung.com/post/python-matrix-factorization/. The author demonstrates how a deterministic algorithm can be implemented with a non-deterministic style of programming.
Matrix factorization is the task of finding two or more matrices such that, when you multiply them together, you get back the original matrix. Matrix factorization algorithms work by decomposing the user-item interaction matrix into the product of two lower-dimensional rectangular matrices. One obvious application is predicting ratings in collaborative filtering.
In a recommendation system such as Netflix or MovieLens, there is a group of users and a set of items (movies for the above two systems). Given that each user has rated some items in the system, we would like to predict how the users would rate the items that they have not yet rated, such that we can make recommendations to the users. In this case, all the information we have about the existing ratings can be represented in a matrix. Each cell of the matrix represents the rating a user has given to a film. An empty cell means that the user has not yet rated the movie.
Hence, the task of predicting the missing ratings can be considered as filling in the blanks such that the values would be consistent with the existing ratings in the matrix.
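As a concrete illustration, here is a toy rating matrix in NumPy. The specific values are made up for illustration; following the convention in the tutorial's code, a 0 entry marks a missing rating:

```python
import numpy as np

# Toy user-item rating matrix: 5 users x 4 movies.
# A 0 entry means the user has not yet rated that movie.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
])

# The observed ratings are the non-zero cells; these are what we train on.
observed = [(i, j, R[i, j]) for i in range(R.shape[0])
            for j in range(R.shape[1]) if R[i, j] > 0]
print(len(observed))  # 13 observed ratings out of 20 cells
```

Predicting the missing ratings then amounts to filling in the seven zero cells consistently with the thirteen observed ones.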
The intuition behind using matrix factorization to solve this problem is that there should be some latent features that determine how a user rates an item. For example, two users would give high ratings to a certain movie if they both like the actors or actresses in the movie, or if the movie is an action movie, which is a genre preferred by both users.
Hence, if we can discover these latent features, we should be able to predict a rating with respect to a certain user and a certain item, because the features associated with the user should match with the features associated with the item.
Having discussed the intuition behind matrix factorization, we can now go on to work on the mathematics. Firstly, we have a set $U$ of users and a set $D$ of items. Let $R$ of size $|U| \times |D|$ be the matrix that contains all the ratings that the users have assigned to the items. Also, we assume that we would like to discover $K$ latent features. Our task, then, is to find two matrices $P$ (of size $|U| \times K$) and $Q$ (of size $|D| \times K$) such that their product approximates $R$:

$R \approx P \times Q^T = \hat{R}$
In this way, each row of $P$ would represent the strength of the associations between a user and the features. Similarly, each row of $Q$ would represent the strength of the associations between an item and the features. To get the prediction of a rating of an item $d_j$ by user $u_i$, we can calculate the dot product of the two vectors corresponding to $u_i$ and $d_j$:

$\hat{r}_{ij} = p_i^T q_j = \sum_{k=1}^{K} p_{ik} q_{kj}$
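The dot-product prediction is a one-liner in NumPy. The factor values below are hypothetical, standing in for whatever the learning procedure would produce:

```python
import numpy as np

K = 2  # number of latent features

# Hypothetical learned factors: each row of P is a user, each row of Q an item.
P = np.array([[1.0, 0.5],
              [0.2, 1.0]])       # 2 users x K features
Q = np.array([[2.0, 0.0],
              [1.0, 1.0],
              [0.0, 3.0]])       # 3 items x K features

# Predicted rating of item j by user i is the dot product p_i . q_j.
r_hat_01 = P[0].dot(Q[1])   # user 0, item 1: 1.0*1.0 + 0.5*1.0 = 1.5
R_hat = P.dot(Q.T)          # the full predicted rating matrix, 2 x 3
```

Note that `P.dot(Q.T)` computes every user-item prediction at once, including cells that were never rated.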
Now, we have to find a way to obtain $P$ and $Q$. One way to approach this problem is to first initialize the two matrices with some values, calculate how different their product is from $R$, and then try to minimize this difference iteratively. Such a method is called gradient descent, aiming at finding a local minimum of the difference.
The difference here, usually called the error between the estimated rating and the real rating, can be calculated by the following equation for each user-item pair:

$e_{ij}^2 = (r_{ij} - \hat{r}_{ij})^2 = \left(r_{ij} - \sum_{k=1}^{K} p_{ik} q_{kj}\right)^2$
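Numerically, the squared error for a single pair looks like this (the latent vectors here are illustrative values, not learned ones):

```python
import numpy as np

# One observed user-item pair with its true rating.
r_ij = 4.0
p_i = np.array([1.2, 0.8])   # latent vector of user i (illustrative values)
q_j = np.array([1.5, 1.0])   # latent vector of item j (illustrative values)

# Error between the true and estimated rating, and its square.
e_ij = r_ij - p_i.dot(q_j)   # 4.0 - 2.6 = 1.4
squared_error = e_ij ** 2    # 1.96
```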
Here we consider the squared error because the estimated rating can be either higher or lower than the real rating.
To minimize the error, we have to know in which direction we have to modify the values of $p_{ik}$ and $q_{kj}$. In other words, we need to know the gradient at the current values, and therefore we differentiate the above equation with respect to these two variables separately:

$\frac{\partial}{\partial p_{ik}} e_{ij}^2 = -2(r_{ij} - \hat{r}_{ij}) q_{kj} = -2 e_{ij} q_{kj}$

$\frac{\partial}{\partial q_{kj}} e_{ij}^2 = -2(r_{ij} - \hat{r}_{ij}) p_{ik} = -2 e_{ij} p_{ik}$
Having obtained the gradient, we can now formulate the update rules for both $p_{ik}$ and $q_{kj}$, where $\alpha$ is the learning rate:

$p'_{ik} = p_{ik} - \alpha \frac{\partial}{\partial p_{ik}} e_{ij}^2 = p_{ik} + 2\alpha e_{ij} q_{kj}$

$q'_{kj} = q_{kj} - \alpha \frac{\partial}{\partial q_{kj}} e_{ij}^2 = q_{kj} + 2\alpha e_{ij} p_{ik}$
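A single update step can be sketched as follows; the learning rate and the starting vectors are assumed values, chosen only to show one step reducing the error:

```python
import numpy as np

alpha = 0.01                  # learning rate (an assumed value)
r_ij = 4.0
p_i = np.array([1.2, 0.8])    # current user vector (illustrative values)
q_j = np.array([1.5, 1.0])    # current item vector (illustrative values)

e_ij = r_ij - p_i.dot(q_j)    # error before the update

# Update rules: p_ik <- p_ik + 2*alpha*e_ij*q_kj, and symmetrically for q.
# Both updates use the error computed before either vector changes.
p_new = p_i + 2 * alpha * e_ij * q_j
q_new = q_j + 2 * alpha * e_ij * p_i

e_after = r_ij - p_new.dot(q_new)   # smaller in magnitude than e_ij
```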
A question might have come to your mind by now: if we find two matrices $P$ and $Q$ such that $P \times Q^T$ approximates $R$, won't our predictions of all the unseen ratings be zeros? In fact, we are not really trying to come up with $P$ and $Q$ such that we can reproduce $R$ exactly. Instead, we will only try to minimise the errors of the observed user-item pairs. In other words, if we let $T$ be a set of tuples, each of which is in the form of $(u_i, d_j, r_{ij})$, such that $T$ contains all the observed user-item pairs together with the associated ratings, we are only trying to minimise $e_{ij}$ for every $(u_i, d_j, r_{ij}) \in T$. (In other words, $T$ is our set of training data.) As for the rest of the unknowns, we will be able to determine their values once the associations between the users, items and features have been learnt.
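Putting the pieces together, here is a condensed sketch in the spirit of the tutorial's implementation. The hyperparameter values (`K`, `steps`, `alpha`, `beta`) are assumptions for this example; the key point is that the inner loop iterates only over the observed pairs:

```python
import numpy as np

def matrix_factorization(R, K=2, steps=5000, alpha=0.002, beta=0.0):
    """Factorize R (with 0 marking a missing rating) into P (|U| x K)
    and Q (|D| x K) by stochastic gradient descent over observed pairs.
    beta is an optional regularization weight, set to 0 here for simplicity."""
    rng = np.random.default_rng(0)
    num_users, num_items = R.shape
    P = rng.random((num_users, K))
    Q = rng.random((num_items, K))
    # The training set T: only the observed (user, item, rating) triples.
    samples = [(i, j, R[i, j]) for i in range(num_users)
               for j in range(num_items) if R[i, j] > 0]
    for _ in range(steps):
        for i, j, r in samples:
            e = r - P[i].dot(Q[j])                 # error on this pair
            p_old = P[i].copy()                    # both updates use old P[i]
            P[i] += alpha * (2 * e * Q[j] - beta * P[i])
            Q[j] += alpha * (2 * e * p_old - beta * Q[j])
    return P, Q

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)
P, Q = matrix_factorization(R)
R_hat = P.dot(Q.T)   # the zero cells of R now hold predicted ratings
```

Because only the observed entries contribute to the loss, the zero cells of `R_hat` are filled in by the learned user and item associations rather than being driven toward zero.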