Monday, March 28, 2016

A federation of reviewing communities? Area-wise analysis of the amount of discussion on IJCAI papers...

I always thought that one of the defining characteristics of AI conferences is the significant amount of inter-reviewer discussions on each paper.

In planning, for example, it is not unheard of for the discussion to be as long as the paper itself (yes, we are thinking of you, @trik!).

Having also handled many AI&Web papers over the years, I had a hunch that the amount of discussion is not the same across areas.

Now that we have access to the reviews for all 2300 IJCAI papers, we decided to see how the various areas stack up.

We counted the number of words in all the discussion comments for each paper, and then averaged these counts within each area. Here is what we got:
[Figure: average discussion length (words per paper) by area]
So, papers in the Planning & Scheduling, Heuristic Search, KR, Constraints, and MAS areas get significantly more discussion than those in Machine Learning, NLP, and AI&Web.
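For the curious, the tally itself can be sketched in a few lines of Python. The input format below (one record per discussion comment, with "area", "paper_id", and "comment" fields) is a made-up stand-in for the actual IJCAI-16 review data, not its real schema:

```python
from collections import defaultdict

def avg_discussion_words(rows):
    """Average total discussion word count per paper, grouped by area.

    Papers with no discussion comments never appear in `rows` and so are
    not counted here; the real analysis may treat them differently.
    """
    words = defaultdict(int)   # (area, paper_id) -> total discussion words
    papers = defaultdict(set)  # area -> paper ids seen
    for row in rows:
        words[(row["area"], row["paper_id"])] += len(row["comment"].split())
        papers[row["area"]].add(row["paper_id"])
    return {
        area: sum(words[(area, pid)] for pid in pids) / len(pids)
        for area, pids in papers.items()
    }

# Tiny invented example: two comments on one Planning paper, one on an ML paper.
rows = [
    {"area": "Planning", "paper_id": "p1", "comment": "needs a fix"},
    {"area": "Planning", "paper_id": "p1", "comment": "agreed"},
    {"area": "ML", "paper_id": "p2", "comment": "ok"},
]
print(avg_discussion_words(rows))  # {'Planning': 4.0, 'ML': 1.0}
```

The same routine, pointed at review texts instead of discussion comments, yields the review-length plot further down.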

The AIW statistic is somewhat understandable, as the reviewers there are not drawn solely from the AI community and may have different cultural norms.

The Machine Learning statistic, however, is worrisome, especially since a majority of the submissions are in ML. Some of the ML colleagues I talked to say that things are not that different at other ML conferences (ICML, NIPS, etc.). Which makes me wonder whether the much-talked-about NIPS experiment is a reflection of peer reviewing in general, or of peer reviewing in ML...

In case you are wondering, here is the plot for the length of reviews (again measured as the total number of words across all reviews). Interestingly, AIW submissions have longer reviews than ML and NLP submissions!

[Figure: total review length (words per paper) by area]
So you know!

Rao
(with all real legwork from Lydia, aka IJCAI-16 data scientist...)


Tuesday, March 22, 2016

Burning Man, IJCAI style.. (or assembling a ~2000-strong program committee from scratch for a one-time task..)

Once every year, in the Black Rock Desert of Nevada, an entire city springs up to support Burning Man, the storied desert festival. As the festival site says, Burning Man is a vibrant participatory metropolis generated by its citizens (only to be erased and rebuilt again the next year).

Then there is the haunting Tibetan Buddhist ritual of Mandala formation,  where an exquisite sand painting is created painstakingly over days, only to be erased once it is done.

I think about these as I look at the 2000-strong IJCAI Program Committee winding down their reviewing and making final decisions on the 2300 papers submitted to IJCAI.

They gathered out of thin air over the last six months. For the main track, it started with me recruiting 44 area chairs, who recruited 340 senior program committee members, who in turn recruited 1400 PC members. All the recruitment was done through good old-fashioned (and ultra-low-tech?) email.

Last week, we provided a mechanism for the program committee members to nominate their colleagues for exemplary reviewing and discussion. To date, we have already received 130 nominations. It is so gratifying to read the justifications accompanying these nominations. Makes me proud to be a member of a community that takes its reviewing responsibilities so conscientiously! All the more heartening considering that peer reviewing is not something explicitly incentivized by the normal performance-recognition mechanisms! (In the coming weeks, I hope to share more metrics about the reviewing process.)

I wish I knew all the program committee members, so I could thank them personally. But, AI is just too large and diverse for that! So, instead this is my public thanks.

Maybe there are better alternatives that provide for a persistent program committee (IROS seems to do this). But I wonder if we would miss the Burning Man/Mandala feel with them.

Rao

Monday, March 14, 2016

Spinning the Wheel of Fortune: Some quasi-humorous consequences of endless conference deadlines..

Not too long ago, there were just a couple of real conference deadlines per year in AI. You worked on your papers through the year, submitted them, waited for the reviews, and, if those didn't work out, revised and resubmitted for the next cycle, a year or at least six months away.

These days, clearly, things have changed--especially in AI--where there are a whole variety of conferences with endless and sometimes overlapping deadlines.

There are many things that can be said about this brave new world, but I want to use this post to share a couple of quasi-humorous consequences..

[Withdrawal after author response]: The author response period for IJCAI-16 ended this Saturday, and we have been getting a steady trickle of mails from authors asking for help withdrawing their papers. Interestingly, most of them seem to have suddenly discovered a "lethal error" [sic] in their experiments and thus want to withdraw the paper as urgently as possible! The fact that a new set of conference deadlines is around the corner (e.g., ACL on 3/18) is just a mere coincidence.

Interestingly, submitting to another conference in the middle of the current conference's review period (perhaps because of poor reviews) is considered a violation; but if the authors simply send a mail requesting withdrawal of their paper, it magically becomes "legal" (but does it become ethical?..).


[Reviews ready before papers!]: On the night we sent the assignments to the 2000 program committee members, I knew I still had to fine-tune a couple of features of the IJCAI review form. But it had been an exhausting weekend, so I figured I would do that in a couple of days and went to sleep. After all, reviewers would need time to read the papers, right? All I needed to do was finalize the review form before they started submitting reviews. Little did I know!

By the time I woke up from a brief 4-hour nap, my mailbox already had some 30 messages from EasyChair about newly submitted reviews! While scrambling to finalize the review form pronto, I was curious how this superhuman feat of near-instantaneous reviewing was possible. It turned out that several of those papers were being resubmitted from a couple of conferences whose review cycles had just ended, and the papers had overlapping reviewers! So reviewers can play the game as well as the authors: send the same papers, get the same reviews! ;-)

To some extent the conferences are all complicit in this, given the excessive interest in the number of submissions a conference receives. After all, the bigger the denominator, the lower the acceptance rate, and thus the higher the perceived selectivity. Apparently the Ivy League isn't the only one running this racket..
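The denominator effect is easy to see with toy numbers (all invented for illustration): the same pool of accepted papers looks markedly more "selective" once churned resubmissions swell the submission count.

```python
# Toy numbers only: 500 accepted papers look more "selective" when
# resubmissions inflate the denominator from 1500 to 2300.
def acceptance_rate(accepted, submitted):
    return accepted / submitted

print(f"{acceptance_rate(500, 1500):.1%}")  # 33.3%
print(f"{acceptance_rate(500, 2300):.1%}")  # 21.7%
```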

We collectively bemoan the quality of submissions and reviews. Maybe we need to put our money where our mouths are, and work on designing mechanisms that don't incentivize the wrong things.. For starters, I hope we start caring more about the impact of a conference (measured by citations, for example) than its "selectivity".

Enough chit chat--I hear another conference deadline approaching... time to give the wheel another whirl..

Rao

Saturday, March 5, 2016

Papers "written by" the program chair and other unexpected consequences of increased diversity in conference participation..

One of the beautiful things about conferences in general, and AI conferences in particular, is how cosmopolitan they have become.

Gone are the days when all the participants and submissions came from just a couple of "usual suspect" countries. It should surprise no one that neither the US nor Europe is the top region in terms of submissions to IJCAI. It is also worth noting that the allure of AI has broadened significantly: many authors are first-time IJCAI submitters. A similar internationalization has occurred in the program committee.

All of this internationalization of IJCAI is truly a cause for celebration. Knuth says that he wants the names of all the authors cited in his Art of Computer Programming books to be rendered in their native scripts. I don't know if I will ever be mentioned in those books, but I certainly look forward to seeing కంభంపాటి సుబ్బారావు in technical forums ;-). Heck, being an AI aficionado, I am sure the day will come when people can write in the language they are most fluent in, and the AI Babel Fish will render it flawlessly into the reader's preferred language.

In the meantime, however, the increased diversity and internationalization do bring up some challenges that we don't always anticipate. Here are a couple of interesting ones:


  • There are several papers currently going through IJCAI review that list "Subbarao Kambhampati" as the sole author. Apparently some authors thought that the way to make their submissions compliant with the double-blind reviewing requirement was simply to put me down as the author. (We decided to leave them in the reviewing pool, as they do follow the spirit, if not the letter, of the double-blind review process. Of course, it did lead to a couple of irate mails from PC members asking (a) why I am submitting papers to the conference I am the program chair of, and (b) why I am not even following the rules ;-).)
  • In some cases, program committee members have developed surprising interpretations of "conflict of interest" under double-blind reviewing. To ensure that they are not reviewing a paper by an author they have a CoI with, they decided to (a) guess the identity of the authors, (b) send mails to them to ask if they are indeed the authors, and (c) declare a CoI when the author turns out to be someone they guessed ;-). This would have been great entertainment if I didn't have to go find new reviewers for those papers :-(
The lesson to be learned, I guess, is that we can't both hope for increased diversity and internationalization in participation and at the same time assume that everyone shares the common understanding of the process that the old boys have. Things need to be put in writing, however obvious they might be to some (large) subset of the participants.


Rao