WebDB 2015 — 18th International Workshop on the Web and Databases
at SIGMOD 2015 in Melbourne, Australia, on 2015-05-31
Photo by DAVID ILIFF. License: CC-BY-SA 3.0

Transparency

WebDB has established itself as the premier workshop in the area of Web and Databases. In order to make its paper selection process fully transparent, we publish here the details of the submission process.

Submissions

Status of papers in the Chair’s view. Note that the view shows only 7 accepted submissions instead of the 9 that were accepted. The missing 2 are the Chair’s own submission, and one submission for which the Chair declared a conflict of interest.
Submissions were single-blind, i.e., reviewers could see the authors' names, but authors could not see the reviewers' names. We decided against double-blind submissions because workshop submissions are often accompanied by technical reports, Web pages, or corresponding full papers, which makes double-blind submission difficult.

Chairs were allowed to co-author papers. They could not see the names of the reviewers of their papers. EasyChair does a very good job of dealing with conflicts of interest: Papers authored by a workshop Chair, or by anyone with whom the Chair has a conflict of interest, are hidden completely from the Chair's view (see figure on the right). This means that a Chair cannot see the scores of their paper, and thus cannot know whether their paper is accepted until they receive the notification email. We have had good experiences with this procedure in the past (AKBC 2012, AKBC 2013, AKBC 2014).

We had 36 submissions, of which 5 had to be rejected outright because they did not contain a full article. Thus, we count 31 valid submissions.

Reviewers

We had 41 reviewers. One reviewer outsourced his 3 papers to 3 sub-reviewers. These sub-reviewers were credited for their work in the proceedings of the workshop. 2 reviewers did not hand in their reviews in time. They will not be invited to the program committees of future workshops that we organize. 3 reviewers did not hand in their reviews at all. They were removed from the program committee, and will not be invited to the program committees of future workshops. The Chairs provided the missing reviews, subject to conflicts of interest.

Reviews

Every paper was reviewed by at least 3 reviewers. Reviewers could give the following scores:
strong accept: +3
accept: +2
weak accept: +1
borderline: 0
weak reject: -1
reject: -2
strong reject: -3

There was a discussion period of 3 days, during which the Chairs went through the papers, and invited the reviewers to discuss the controversial papers.

In the end, the average scores of the papers were:

1.7
1.7 (1 Chair paper)
1.5
1.0
1.0
1.0 (1 Chair paper)
1.0
0.7
0.7 (acceptance cut-off)
0.3
0.3
0.3
0.3
0.3
0.0
<0 (16 other papers)
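As a sketch of how the acceptance decision follows mechanically from the scores, the following averages each paper's reviews and applies the 0.7 cut-off. The paper names and review scores are purely illustrative, not the actual WebDB 2015 data.

```python
# Illustrative sketch of the acceptance computation (hypothetical review
# scores, not the actual submissions). Each paper has at least 3 reviews
# on the scale from -3 (strong reject) to +3 (strong accept).
reviews = {
    "paper-A": [2, 2, 1],    # average 1.67
    "paper-B": [1, 1, 1],    # average 1.0
    "paper-C": [1, 0, 0],    # average 0.33
    "paper-D": [-1, 0, -2],  # average -1.0
}

CUTOFF = 0.7  # papers with an average score >= 0.7 were accepted

# Average score per paper.
averages = {p: sum(s) / len(s) for p, s in reviews.items()}

# Accepted papers, highest average first.
accepted = sorted((p for p, a in averages.items() if a >= CUTOFF),
                  key=lambda p: -averages[p])

print(accepted)  # ['paper-A', 'paper-B']
```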

We manually went through the borderline papers with a score of 0.3 and discussed whether to accept them. In the end, we decided not to override the reviewers’ scores, and rejected them.

The best paper award goes to the paper with the highest review score (excluding Chair papers).

Reviewer Awards

We manually assessed the reviews for quality (excluding reviews by the Chairs). We gave the following scores:
-1: the review was late or not handed in at all
0: the review exists
+1: the review is acceptable. It is not detailed, but we do not have to be embarrassed sending it to the authors.
+2: the review is good. It summarizes the paper, details strengths and weaknesses, and points out ways for improvement. This is the type of review we expect.
+3: the review is excellent. It first summarizes the submission, its evaluation (not counting the summary) is several paragraphs long, and it corrects at least one non-trivial point (e.g., a formula) and/or mentions uncited related work other than the reviewer’s own.

The distribution of average scores per reviewer (including sub-reviewers) was as follows:

3
3
3
2.5
2.5
2.5
2.5
2.33
2.33
2
2
2
2
2
2
2
1.83
1.75
1.75
1.67
1.67
1.5
1.5
1.5
1.5
1.5
<1.5 (16 other reviewers)

After removing reviewers and sub-reviewers who had to review only 1 paper, we were left with 6 reviewers with an average score above 2.0. These received an outstanding reviewer award.
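The award selection can be sketched as follows: average the quality scores per reviewer, drop reviewers with only one assigned paper, and keep those averaging above 2.0. The reviewer names and scores below are made up for illustration, not the actual assessment data.

```python
# Hypothetical review-quality scores per reviewer, on the -1..+3 rubric
# described above. Names and numbers are illustrative only.
quality = {
    "reviewer-1": [3, 3],     # average 3.0  -> award
    "reviewer-2": [2, 3, 2],  # average 2.33 -> award
    "reviewer-3": [2, 2],     # average 2.0  -> no award (not above 2.0)
    "reviewer-4": [3],        # average 3.0  -> excluded (only 1 paper)
}

# Keep reviewers with more than one paper and an average strictly above 2.0.
winners = [r for r, scores in quality.items()
           if len(scores) > 1 and sum(scores) / len(scores) > 2.0]

print(winners)  # ['reviewer-1', 'reviewer-2']
```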

The award winners are: Benny Kimelfeld, Oktie Hassanzadeh, Nicoleta Preda, Felix Naumann, Danai Symeonidou, and Yael Amsterdamer.