
The Netflix Prize – shootout on the finish line


From a New York Times article:

After nearly three years and entries from more than 50,000 contestants, a multinational team says that it has met the requirements to win the million-dollar Netflix Prize: It developed powerful algorithms that improve the movie recommendations made by Netflix’s existing software by more than 10 percent.

The online movie rental service uses its Cinematch software to analyze each customer’s film-viewing habits and recommends other movies that customer might enjoy. Because accurate recommendations increase Netflix’s appeal to its customers, the movie rental company started a contest in October 2006, offering $1 million to the first contestant that could improve the predictions by at least 10 percent.

On June 26th, 2009, the team "BellKor's Pragmatic Chaos" was the first to submit a solution that improves by more than 10% on Cinematch, the algorithm Netflix uses to match customers and movies. This triggered a final 30-day period in which other teams had the chance to beat that submission; the final winner will be the team with the best improvement. It looked very much like BellKor's Pragmatic Chaos would win until yesterday… but then a new team, "The Ensemble", popped up out of nothingness (more or less; it is actually a collaborative effort of several other teams) and made a submission on July 25th, 18:32:29 that outperforms the one of BellKor's Pragmatic Chaos by a tiny fraction. Snapshot of the leaderboard:

[Leaderboard snapshot]

Given that the 30-day period was triggered on June 26th, and depending on the day-counting convention, this looks very much like a shootout on the finish line. Who knows, maybe there is another team lurking in the dark, ready to make a last-minute submission? Stay tuned!

Update 26.07.2009: We have new submissions and the match continues:

[Updated leaderboard snapshot]

Update 26.07.2009: Game over

Contest Closed

Thank you for your interest in the Netflix Prize.

We are delighted to report that, after almost three years and more than 43,000 entries from over 5,100 teams in over 185 countries, the Netflix Prize Contest stopped accepting entries on 2009-07-26 18:42:37 UTC. The closing of the contest is in accordance with the Rules — thirty (30) days after a submitted prediction set achieved the Grand Prize qualifying RMSE on the quiz subset.

Team registration, team updates and the dataset download are also closed. The Contest Forum and Leaderboard remain open.

Qualified entries will be evaluated as described in the Rules. We look forward to awarding the Grand Prize, which we expect to announce in a few weeks. However, if a Grand Prize cannot be awarded because no submission can be verified by the judges, the Contest will reopen. We will make an announcement on the Forum after the Contest judges reach a decision.

Once the Grand Prize is awarded, the ratings for the qualifying set will be released and the combined training data and qualifying sets will become available upon request at the Machine Learning Archive at UC Irvine.

Thank you again for your interest in the Netflix Prize. Keep checking this site for updates in the coming weeks.

Update 26.07.2009: Several rumors are spreading that the final winner is not yet determined, as the score posted online is computed on a data set used only for reporting performance, whereas Netflix internally uses a different one for the actual judging. From the FAQ:

Why this whole quiz/test subset structure? Why not reveal a submission’s RMSE on the test subset?

We wanted a way of informing you and your competitive colleagues about your progress toward a prize while making it difficult for you to simply train and optimize against “the answer oracle”. We also wanted a way for the judges to determine how robust your algorithm is. So we have you supply nearly 3 million predictions, then tell you and the world how you did on one half (the “quiz” subset) while we judge you on how you did on the other half (the “test” subset), without telling you that score or which prediction you make applies to which subset.

So it is possible that the story looks different on the "test" subset, especially given how close the two teams are.
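To make the mechanism concrete, here is a minimal sketch of such a split evaluation in Python. Everything in it (the toy data, the random half/half split, the variable names) is an assumption for illustration and not Netflix's actual code; it merely mimics the scheme described in the FAQ: the RMSE on the "quiz" half is published, while the RMSE on the hidden "test" half decides the prize.

import numpy as np

# Illustrative sketch only; the data, split, and names are made up.
rng = np.random.default_rng(0)

def rmse(predicted, actual):
    # Root mean squared error: the contest's accuracy metric.
    return np.sqrt(np.mean((predicted - actual) ** 2))

# Toy stand-in for the ~3 million qualifying predictions.
n = 10_000
actual = rng.integers(1, 6, size=n).astype(float)   # hidden true ratings (1-5 stars)
predicted = actual + rng.normal(0.0, 0.9, size=n)   # a team's submitted predictions

# Secret half/half split into "quiz" and "test"; teams never learn
# which of their predictions falls into which half.
quiz = rng.permutation(n) < n // 2
quiz_rmse = rmse(predicted[quiz], actual[quiz])     # published on the leaderboard
test_rmse = rmse(predicted[~quiz], actual[~quiz])   # used for the actual judging

print("quiz RMSE (public): %.4f" % quiz_rmse)
print("test RMSE (secret): %.4f" % test_rmse)

Because the two halves are random samples from the same pool of ratings, the two scores are typically very close; but when two teams sit within a ten-thousandth of each other on the quiz half, the ordering on the test half can well flip.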

If you are interested in the math behind it, then have a look here! At the end of that article you will find additional links.

Written by Sebastian

July 26, 2009 at 10:11 am
