
Programming Challenges

Feed Optimizer

Quora shows a customized feed of recent stories on a user's home page. Stories in Quora refer to activities that happen on the site, for example, when a user posts a note, adds a question, or upvotes an answer. We score each story based on its type and other characteristics, and this score represents the value we think a story brings to the user. We want to be able to quickly generate the feed of the best and most recent stories for the user every time they reload their home page.

Your task will be to design the algorithm that picks the stories to display in the feed.

You are given a list of stories, each with a time of publication, a score, and the height in pixels it takes to display. Given the total number of pixels available in the browser for displaying the feed, you want to maximize the sum of scores of the stories displayed each time the user reloads their home page. We only want to consider recent stories, so only stories published within a recent time window before the reload should be considered. You do not have to use up all the pixels in the browser.

Input format (read from STDIN):
The first line of input will be 3 positive integers: N, the number of events; W, the length of the time window for recent stories; and H, the height of the browser in pixels.

There will be N lines following that, each beginning with ‘S’ if it is a story event, or ‘R’ if it is a reload event. A story event will have 3 positive integers: the time of publication, the score, and the height in pixels of that story. A reload event will have 1 positive integer: the time of reload.

The events will always be in chronological order, and no two events will happen at the same time.

For example:
9 10 100
S 11 50 30
R 12
S 13 40 20
S 14 45 40
R 15
R 16
S 18 45 20
R 21
R 22

Output format (write to STDOUT):
For each reload event given in the input, you are to output a line of integers. First, the maximum score of stories you can show in the feed. This should be followed by the number of stories picked and the id number for each story picked, in increasing order. Stories are given an id starting from 1 in the order of their time of publication.
If two sets of stories give the same score, choose the set with fewer stories. If there is still a tie, choose the set which has the lexicographically smaller set of ids, e.g. choose [1, 2, 5] over [1, 3, 4].

For example:
50 1 1
135 3 1 2 3
135 3 1 2 3
140 3 1 3 4
130 3 2 3 4

There are 4 stories (ids 1 to 4) and 5 reload events. At the first reload, only one story, with a score of 50, is available for display. At the next two reloads, we have 3 stories that take up 90 of the 100 available pixels, for a total score of 135. When we reload at time 21, there are 4 stories to choose from, but only 3 will fit into the browser height. The best set is [1, 3, 4], for a total score of 140. At the last reload, story 1 is no longer considered because it is more than 10 time units old, so the best set is [2, 3, 4].

All times are positive integers up to 1,000,000,000.
All scores are positive integers up to 1,000,000.
All heights are positive integers.
0 < N <= 10,000
0 < H <= 2,000
0 < W <= 2,000

You should aim to have your algorithm be fast enough to solve our largest test inputs in under 5 seconds, or be as close to that as possible.
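Since H is at most 2,000, selecting stories for a single reload is a classic 0/1 knapsack over the stories currently in the window. Below is a minimal Python sketch (the function name `best_feed` is ours); it maximizes score only and omits the tie-breaking rules above, which a complete solution would have to enforce:

```python
def best_feed(stories, H):
    """0/1 knapsack over stories, where stories is a list of
    (id, score, height) tuples for the current window.
    Returns (best_score, sorted list of chosen ids).
    Tie-breaking (fewer stories, lexicographically smaller ids)
    is deliberately omitted in this sketch."""
    # dp[h] = best score achievable using at most h pixels
    dp = [0] * (H + 1)
    # picks[h] = the set of story ids realizing dp[h]
    picks = [frozenset()] * (H + 1)
    for sid, score, height in stories:
        # Iterate heights downward so each story is used at most once.
        for h in range(H, height - 1, -1):
            if dp[h - height] + score > dp[h]:
                dp[h] = dp[h - height] + score
                picks[h] = picks[h - height] | {sid}
    best_h = max(range(H + 1), key=lambda h: dp[h])
    return dp[best_h], sorted(picks[best_h])
```

Each reload costs O(n * H) for the n stories in the window (plus the cost of copying the picked sets); since reloads share most of their windows, a faster solution can reuse work between consecutive reloads instead of recomputing from scratch.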

For this contest, we are considering a smaller-scale version of our feed optimization problem with simplified constraints. (The actual feed on our site uses a different algorithm.)

Answer Classifier

Quora uses a combination of machine learning (ML) algorithms and moderation to ensure high-quality content on the site. High answer quality has helped Quora distinguish itself from other Q&A sites on the web. 

Your task will be to devise a classifier that is able to tell good answers from bad answers, as well as humans can.  A good answer is denoted by a +1 in our system, and a bad answer is denoted by a -1.

Input format (read from STDIN):
The first line contains two integers, N and M, where N is the number of training records and M is the number of parameters. The next N lines each contain one training record. Then comes one integer q, the number of records to be classified, followed by q lines of query data.

Training data corresponds to the following format:
<answer-identifier> <+1 or -1> (<feature-index>:<feature-value>)*

Query data corresponds to the following format:
<answer-identifier> (<feature-index>:<feature-value>)*

The answer identifier is an alphanumeric string of no more than 10 characters. Each identifier is guaranteed to be unique. All feature values are doubles.
0 < M < 100
0 < N < 50,000
0 < q < 5,000

For example:
5 23
2LuzC +1 1:2101216030446 2:1.807711 3:1 4:4.262680 5:4.488636 6:87.000000 7:0.000000 8:0.000000 9:0 10:0 11:3.891820 12:0 13:1 14:0 15:0 16:0 17:1 18:1 19:0 20:2 21:2.197225 22:0.000000 23:0.000000
LmnUc +1 1:99548723068 2:3.032810 3:1 4:2.772589 5:2.708050 6:0.000000 7:0.000000 8:0.000000 9:0 10:0 11:4.727388 12:5 13:1 14:0 15:0 16:1 17:1 18:0 19:0 20:9 21:2.833213 22:0.000000 23:0.000000
ZINTz -1 1:3030695193589 2:1.741764 3:1 4:2.708050 5:4.248495 6:0.000000 7:0.000000 8:0.000000 9:0 10:0 11:3.091042 12:1 13:1 14:0 15:0 16:0 17:1 18:1 19:0 20:5 21:2.564949 22:0.000000 23:0.000000
gX60q +1 1:2086220371355 2:1.774193 3:1 4:3.258097 5:3.784190 6:0.000000 7:0.000000 8:0.000000 9:0 10:0 11:3.258097 12:0 13:1 14:0 15:0 16:0 17:1 18:0 19:0 20:5 21:2.995732 22:0.000000 23:0.000000
5HG4U -1 1:352013287143 2:1.689824 3:1 4:0.000000 5:0.693147 6:0.000000 7:0.000000 8:0.000000 9:0 10:1 11:1.791759 12:0 13:1 14:1 15:0 16:1 17:0 18:0 19:0 20:4 21:2.197225 22:0.000000 23:0.000000
2
PdxMK 1:340674897225 2:1.744152 3:1 4:5.023881 5:7.042286 6:0.000000 7:0.000000 8:0.000000 9:0 10:0 11:3.367296 12:0 13:1 14:0 15:0 16:0 17:0 18:0 19:0 20:12 21:4.499810 22:0.000000 23:0.000000
ehZ0a 1:2090062840058 2:1.939101 3:1 4:3.258097 5:2.995732 6:75.000000 7:0.000000 8:0.000000 9:0 10:0 11:3.433987 12:0 13:1 14:0 15:0 16:1 17:0 18:0 19:0 20:3 21:2.639057 22:0.000000 23:0.000000

This data is completely anonymized and extracted from real production data, and thus will not include the raw form of the answers. However, we have extracted as many features as we think are useful, and you can decide which features make sense to include in your final algorithm. The actual labeling of good and bad answers is done organically on our site, through human moderators.

Output format (write to STDOUT):
For each of the q queries, output one line to stdout representing your classifier's decision on whether the answer is good or bad:

<answer-identifier> <+1 or -1>

For example:

PdxMK +1
ehZ0a -1
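As a minimal illustration of handling this input (not a recommended model), here is a tiny nearest-centroid baseline in plain Python. The function names are ours; a serious submission would normalize the features first (feature 1 is many orders of magnitude larger than the rest, so it would dominate these distances) and use a proper ML library:

```python
def parse_features(tokens):
    # Tokens look like "4:2.77" -> {4: 2.77}
    return {int(i): float(v) for i, v in (t.split(":") for t in tokens)}

def train_centroids(records):
    """records: list of (label, feature_dict) pairs.
    Returns the per-class mean feature vector."""
    sums, counts = {}, {}
    for label, feats in records:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, {})
        for i, v in feats.items():
            acc[i] = acc.get(i, 0.0) + v
    return {lbl: {i: s / counts[lbl] for i, s in acc.items()}
            for lbl, acc in sums.items()}

def classify(feats, centroids):
    # Predict the label of the closest centroid (squared Euclidean
    # distance; missing features are treated as 0.0).
    def dist(c):
        keys = set(feats) | set(c)
        return sum((feats.get(k, 0.0) - c.get(k, 0.0)) ** 2 for k in keys)
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))
```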

You are given a relatively large sample input dataset offline, with its corresponding output, to fine-tune your program with your ML libraries. It can be downloaded here:

Only one very large test dataset will be given for this problem online as input to your program for scoring.  This input data set will not be revealed to you.

Each classification is scored separately; the score for this problem is the sum of points over all correct classifications. To prevent credit for naive solutions (outputting all +1s, for example), points are awarded only after X correct classifications, where X is the number of +1 answers or the number of -1 answers, whichever is greater.

Your program should complete in minutes. Try to achieve as high an accuracy as possible with this constraint in mind.

Python URI

In Python, write a class or module with a set of functions for manipulating a URI. For this exercise, pretend that the urllib, urllib2, and urlparse modules don't exist. You can use other standard Python modules, such as re, for this. The focus of the class or module you write should be on usage on the web, so you'll want things that make it easy to update or append a querystring variable, get the scheme for a URI, etc., and you may want to include ways to figure out the domain for a URL.
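To give a flavor of what is being asked, here is a minimal sketch built on the generic URI-splitting regex from RFC 3986 Appendix B. The class name and methods are illustrative, not a prescribed API, and percent-encoding, normalization, and validation are all omitted:

```python
import re

# The URI-splitting regex from RFC 3986 Appendix B, with non-capturing
# groups so that match.groups() yields exactly the five components.
_URI_RE = re.compile(
    r'^(?:([^:/?#]+):)?(?://([^/?#]*))?([^?#]*)(?:\?([^#]*))?(?:#(.*))?$')

class URI:
    """Minimal sketch of a URI wrapper. A full solution would also
    percent-encode values, normalize paths, and validate components."""

    def __init__(self, uri):
        m = _URI_RE.match(uri)
        (self.scheme, self.authority, self.path,
         self.query, self.fragment) = m.groups()

    def set_query_param(self, key, value):
        # Replace or append key=value in the querystring (no encoding here).
        pairs = [p.split('=', 1) for p in (self.query or '').split('&') if p]
        pairs = [p for p in pairs if p[0] != key] + [[key, value]]
        self.query = '&'.join('='.join(p) for p in pairs)
        return self

    def __str__(self):
        # Recompose per RFC 3986 section 5.3.
        out = ''
        if self.scheme:
            out += self.scheme + ':'
        if self.authority is not None:
            out += '//' + self.authority
        out += self.path or ''
        if self.query:
            out += '?' + self.query
        if self.fragment is not None:
            out += '#' + self.fragment
        return out
```

For example, `str(URI('http://example.com/a?x=1').set_query_param('x', '2'))` rewrites the querystring while leaving the other components intact.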

We're looking for correctness (you'll probably want to read the relevant RFCs; make sure you handle edge cases) and elegance of your API (does it let you do the things you commonly want to do with URIs in a really straightforward way?), as well as coding style. If you don't know Python already, then this is also an exercise in learning new things quickly and well. Your code should be well-commented and documented and conform to the guidelines in the PEP 8 Style Guide for Python Code. Include some instructions and examples of usage in your documentation. You may also want to write unit tests.

Email Submission

Send your solutions to

Trend Analyzer

Note: This is an open-ended problem geared towards our Data Scientist role. Data scientists at Quora work with data at all levels of our product, allowing our team to make informed decisions about which features to pursue and which areas need improvement. Being an information repository in itself, efficient and accurate use of data is central to everything we do at Quora.

At Quora, we deal with a lot of data. As the number of users on our site grows, so does the volume of content and the number of interactions that go on daily.

We want to be able to determine various social and intellectual topics that are trending over time. However, we take privacy very seriously, and so we want to do this in a way that does not put any individual’s personal information at risk.

Your task is to design and implement a trend analysis engine. This engine should be able to take as input a table of raw data and present an interface for trend analysis. You are free to design this interface and choose the data analysis features that you wish to support. It is important that the engine only exposes to the user data that does not have any personal identifiers and protects sensitive attributes from being revealed. We provide some links to resources about how to think about this in the Resources section.

Your submission will be evaluated according to the following criteria:
  • Code Quality and Design (25%):
    • Is the code organized well and readable?
    • Were there good design decisions made and documented?
  • Privacy (25%):
    • Is there adequate preservation of privacy?
    • Is there a reasonable way to trade off between protecting personal information and getting sufficiently high quality of data for analysis?
  • Analytics (25%):
    • What analytical metrics do you support?
    • How do you detect or compute trends in the data?
  • Visualization (25%):
    • Are there compelling visual aids for presenting data?
    • Are there ways to interactively explore the data?

A Primer on Privacy Protection
Suppose there is a table with the following raw data:
Table 1: Raw data with personal information exposed

The first field (User ID), or the second and third fields used together (First and Last Name), uniquely identify a person in this data set. If you are trying to determine aggregate, big-picture trends in Popularity from this data, there is no need for data of this resolution and granularity.

Instead, if you knew the fields which are identifiers and quasi-identifiers, you could produce the following table after an anonymization process.
Table 2: Raw data after anonymization process

Grouped in this way, the table's rows are placed into groups within which you can no longer distinguish an individual record. You can still observe that asking questions yields a high popularity score compared to answering questions, while following topics has a mixed response.
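The grouping described here is essentially k-anonymity [1]: every combination of quasi-identifier values must be shared by at least k records. A minimal sketch of the property check (field names in the example are hypothetical):

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """rows: list of dicts; quasi_identifiers: field names that could
    re-identify someone in combination. Returns True if every
    combination of quasi-identifier values appears in at least k rows."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers)
                     for row in rows)
    return all(count >= k for count in groups.values())
```

An anonymization engine would generalize or suppress quasi-identifier values (e.g., bucketing ages, truncating zip codes) until this check passes for its chosen k, trading detail for privacy.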

A Primer on Time-Series Data
As another example, here is a different data set that contains time-series data.
Table 3: Raw data with no apparent patterns or trends

At first glance, it is difficult to detect any patterns or trends in the data. But if you strip out personal info and sort by Location and then by Time, a few things become clear.
Table 4: Raw data after sorting exposes different patterns and trends

First, there is a high correlation between the Location of the user and the Object being acted upon. In this case, the users seem to be discussing the public transport systems in their location (MBTA for Boston and Caltrain for Palo Alto). It is also clear that a question was asked about the Caltrain, and after a short wait there was a flurry of activity around that topic: an answer followed by 2 votes. In Boston, by contrast, there was just a spike in questions asked about the MBTA.

Input Format
Your engine will be tested by us on several data sets, with both synthetic and real data. Your engine should take as input a path to a comma-delimited text file. The first line will be the field names, the second line will be the field types, and the remaining rows will be the raw data.

The table below lists the field types we want you to support. If you have ideas for other types, feel free to include support for them and provide us an input file for us to evaluate them.
Table 5: Field types to be specified in the input data set

The above 2 sample data sets can be described by the following schema.

Example 1
User ID, First Name, Last Name, Action Type, Object, Popularity

Example 2
Time, User ID, Location, Action Type, Object

Submission Format
Your submission should be the URL to a standalone repository that we can clone, build and run with minimal effort.

You have a week for this problem, and you may ask clarification questions by email. We will try our best to respond to your questions, but given the open-ended nature of this problem, you may also use your best judgement to make decisions; let us know how you dealt with these by documenting them.

We give links below to 3 papers that we believe are excellent starting points to think about preserving privacy in microdata. You may extend these ideas with other state-of-the-art in privacy and data research.

In your submission, you may also use any third party software libraries that are helpful to you, provided you cite them properly in your comments.

[1] L. Sweeney. k-Anonymity: A model for protecting privacy. International Journal on Uncertainty, Fuzziness and Knowledge-based Systems, 10(5):557-570, 2002.

[2] A. Machanavajjhala, J. Gehrke, D. Kifer, and M. Venkitasubramaniam. l-Diversity: Privacy beyond k-anonymity. In Proc. 22nd Intl. Conf. on Data Engineering (ICDE), page 24, 2006.

[3] N. Li, T. Li, and S. Venkatasubramanian. t-Closeness: Privacy beyond k-anonymity and l-diversity. In Proc. 23rd Intl. Conf. on Data Engineering (ICDE), 2007.

Sample Data
You may download test datasets as a tarball from:

The original data was sourced from, but we've touched it up a little and added the relevant field types to make the datasets applicable to this problem. For each dataset in the tarball, we provide both the original raw data and the cleaned, annotated version, as well as the script used to do the cleaning.

Email Submission

Send your solutions to