Reviewing Criteria: Research Track

Below we have provided some guidelines to reviewers on how to write reviews, both the content of reviews and also how the numerical scoring system works. Many of the suggestions below have been liberally borrowed from other conferences - so thanks to the many folks who have contributed to writing these types of "guidance" pages in the past.

Writing Reviews: Content

For each paper you will provide written comments under each of the headings below. Your review should address both the strengths and weaknesses of the paper - identify the areas where you believe the paper is particularly strong and particularly weak. This will be very valuable to the PC Chairs and the SPC.

Novelty: This is arguably the single most important criterion for selecting papers for the conference. Reviewers should reward papers that propose genuinely new ideas or novel adaptations/applications of existing methods. It is not the duty of the reviewer to infer what aspects of a paper are novel - the authors should explicitly point out how their work is novel relative to prior work. Assessment of novelty is obviously a subjective process, but as a reviewer you should try to assess whether the ideas are truly new, or are novel combinations or adaptations or extensions of existing ideas, or minor extensions of existing ideas, and so on.

Technical Quality: Are the results sound? Are there obvious flaws in the conceptual approach? Did the authors ignore (or appear unaware of) highly relevant prior work? Are the experiments well thought out and convincing? Are there obvious experiments that were not carried out? Will it be possible for later researchers to replicate these results? Are the data sets and/or code publicly available? Did the authors discuss sensitivity of their algorithm/method/procedure to parameter settings? Did the authors clearly assess both the strengths and weaknesses of their approach?

Potential Impact and Significance: Is this really a significant advance in the state of the art? Is this a paper that people are likely to read and cite in later years? Does the paper address an important problem (e.g., one that people outside KDD are aware of) or just a problem that only a few data mining researchers are interested in and that won't have any lasting impact? Is this a paper that researchers and/or practitioners might find useful 5 or 10 years from now? Is this work that can be built on by other researchers?

Clarity of Writing: Please make full use of the range of scores for this category so that we can identify poorly-written papers early in the process. Is the paper clearly written? Is there a good use of examples and figures? Is it well organized? Are there problems with style and grammar? Are there issues with typos, formatting, references, etc? It is the responsibility of the authors of a paper to write clearly, rather than it being the duty of the reviewers to try to extract information from a poorly written paper. Do not assume that the authors will fix problems before a final camera-ready version is published - unlike journal publications, there will not be time to carefully check that accepted papers are properly written. Think of future readers trying to extract information from the paper - it may be better to advise the authors to revise a paper and submit to a later conference, than to accept and publish a poorly-written version.

Additional Points (optional): this is an optional section on the review form that can be used to add additional comments for the authors that don't naturally fit into any of the areas above.

Comments that are only for the SPC and PC (optional): again, this is an optional section. If there are any comments that you would like to communicate to the SPC and PC chairs, but that you do not wish to be seen by the authors, they can go in this section.

General Advice on Review Writing: please be as precise as you can in your comments to the authors and avoid vague statements. Your criticism should be constructive where possible - if you are giving a low score to a paper, try to be clear in explaining to the authors the types of actions they could take to improve their paper in the future. For example, if you think that the work is incremental relative to prior work, please cite the specific relevant prior work you are referring to. Or if you think the experiments are not very realistic or useful, let the author(s) know what they could do to improve them (e.g., more realistic data sets, larger data sets, sensitivity analyses, etc.).

Writing Reviews: Numerical Scoring

For KDD-2012 we are using a 7-point scoring system. We strongly encourage you to use the full range of scores, if appropriate for your papers. Try not to put all of your papers in a narrow range of scores in the middle of the scale, e.g., 3s, 4s, and 5s. Don't be afraid to assign 1s/2s, or 6s/7s, if papers deserve them. If you are new to the KDD conference (or have not attended for a number of years) you may find it useful to take a look at online proceedings from recent KDD conferences to help calibrate your scores. The scoring system is as follows:

  • 7: An excellent paper, a very strong accept. I will fight for acceptance, this is potentially best-paper material.
  • 6: A very good paper, should be accepted. I vote and argue for acceptance, clearly belongs in the conference.
  • 5: A good paper overall, accept if possible. I vote for acceptance, although would not be upset if it were rejected because of the low acceptance rate.
  • 4: A decent paper, but possibly below the KDD threshold. I tend to vote for rejecting it, but could be persuaded otherwise.
  • 3: An OK paper, but not good enough. A rejection. I vote for rejecting it, although I would not be upset if it were accepted.
  • 2: A clear rejection. I vote and argue for rejection. Clearly below the standards for the conference.
  • 1: A strong rejection. I'm surprised it was submitted to this conference. I will actively fight for rejection.