Tuesday, June 7, 2016

Announcing IJCAI-2016 Outstanding Program Committee Members: Towards making good reviewing an attainable goal..


I have always felt that "outstanding program committee member" awards are somewhat quixotic, inasmuch as we tend to pick only a handful of people for the recognition. As program committees continue to grow in size (IJCAI 2016 had over 2000 people on the program committee), and we are faced with continual deadlines and hyper-paper-productivity, it seems counterproductive to recognize just a vanishing percentage of people for reviewing, as it might inadvertently send the message that good reviewing is a rather unattainable, "get a Nobel Prize"-like goal.

So, for this IJCAI, we requested the program committee members to nominate their peers who, in their eyes, had done an outstanding job of reviewing/handling a paper. We explicitly told them that our goal was to recognize a significant percentage of the program committee. We made a simple Google Forms interface through which PC members could submit nominations and justifications.

As soon as we made the link public, we had an onslaught of nominations!  Apparently there was a pent-up demand for such a mechanism..

Given the size and diversity of the program committee, we went through all the nominations and justifications manually to make sure that the system was not being gamed.

Sure enough, we did catch a few instances of unsubstantive/non-sequitur nominations, such as

Prof. X has always submitted to IJCAI and I nominate X for outstanding reviewing

or

Dr. Y asked me to review some papers, and I nominate Y for outstanding reviewing

or, in one particularly egregious case, a nomination from a program committee member who had been ejected from the IJCAI program committee for complete malfeasance, and who nonetheless had the audacity to nominate a bunch of his/her colleagues..

But the overwhelming majority of nominations were  substantive and heartfelt. It was quite a gratifying experience to go through them!

Yesterday, we updated the IJCAI program committee listing on the website by putting a resplendent blue ribbon  (Nominated for good review) in front of each program committee member for each nomination he/she received.

You can now look at these  beribboned program committee members--nearly 200 of them (or about 10% of the program committee)--at the webpage here:

http://ijcai-16.org/index.php/welcome/view/program_committee

If any of these are your colleagues, please take time to congratulate them. They did what they did despite knowing that there are few tangible rewards and incentives for good reviewing.

If you are an author who submitted a paper to IJCAI and got thoughtful reviews, you know that your experience was not by any means isolated!

For myself, as a graying member of the AI community, I am freshly tickled pink that "good reviewing" is still the rule rather than the exception for us! Thank you, IJCAI-16 program committee!


--Rao

Wednesday, April 13, 2016

The (increasing?) practice of expanding co-author lists after paper acceptance...

Something funny happened on Easychair the day after acceptance decisions were sent.

We noticed that a lot of people had logged in and added additional co-authors to their--now accepted--papers. In many cases 2, 3, and even 4 authors were being added!

Since we had kept backup snapshots of Easychair every week during the review period, we knew the original author lists for all the papers (which is how they are listed on the accepted-papers list).
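
For the curious, catching these changes takes nothing more than a little set arithmetic over two exports. Here is a minimal sketch (not our actual tooling), assuming hypothetical CSV files with a paper_id column and a semicolon-separated authors column:

    import csv

    def load_author_lists(path):
        # Map paper id -> set of author names, from a hypothetical CSV
        # export with 'paper_id' and semicolon-separated 'authors' columns.
        lists = {}
        with open(path, newline='', encoding='utf-8') as f:
            for row in csv.DictReader(f):
                lists[row['paper_id']] = {a.strip() for a in row['authors'].split(';')}
        return lists

    at_submission = load_author_lists('authors_snapshot.csv')  # weekly backup
    now = load_author_lists('authors_current.csv')             # post-decision

    # Flag papers whose author list grew after the snapshot was taken
    for pid, authors in now.items():
        added = authors - at_submission.get(pid, set())
        if added:
            print(pid, 'author(s) added post-acceptance:', sorted(added))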

I understand that sometimes we may get non-trivial help from a colleague after paper submission that qualitatively changes the camera-ready version, thus legitimately necessitating an expansion of the author list. We have done this ourselves a couple of times in our group.

Still, it is a bit surprising that 50+ papers suddenly found this need. Even more surprising, many of them found that they had additional help from not just one but "multiple" forgotten co-authors. Clearly the adage--that success has many parents, while failure is an orphan--seems to be playing out in spades here ;-)

So we decided to look into this phenomenon more closely by requesting that authors provide a justification for why the new co-authors need to be added. We are now getting mails from even more people, with requests to add 1, 2, or 3 co-authors.

The justifications range from the quite reasonable ones, such as

"I forgot to give credit to an undergraduate intern who helped with the work"

"X helped me prove an additional theorem" 

to the  somewhat questionable

"X helped with the rebuttal"

"X financially supported this work"

"X, Y and Z just got permission from their companies to join as authors"

"X wants to come to the conference, and thus would like their name on the paper" 

"because X is a respected researcher in our university"

to the utterly  inexplicable

"I have forgotten about listing any of my co-authors".

"We didn't put some of the authors when submitting for blind review"

"When trying to beat the submission deadline, we wanted to save time by listing just one author"


It seems to me that the situation has moved a long way from adding authors when legitimately needed, to deliberately keeping the original author lists incomplete.

There may also be some implicit cultural norms at work here, inasmuch as over 90% of the papers expanding their author lists after acceptance are from a specific region.

Large-scale author-list modification post-acceptance does pose several quandaries for the conference. In addition to the obvious intangible long-term ones, such as cheapening co-authorship, there is a more immediate and tangible one: we rely on author lists to ensure that conflict-of-interest situations are avoided. It becomes very hard to do that if the author lists are fluid and subject to massive changes after acceptance.
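
To make the conflict-of-interest quandary concrete, here is a toy sketch (purely illustrative; not IJCAI's actual mechanism) of how an assignment vetted against the submission-time author list can silently become conflicted once authors are added:

    def conflicted(reviewer, authors, coi):
        # coi maps a reviewer to the set of people they must not review
        return bool(coi.get(reviewer, set()) & authors)

    coi = {'reviewer1': {'Dr. Y'}}
    authors_at_submission = {'paper42': {'Ms. X'}}
    assignment = {'paper42': ['reviewer1']}        # clean at assignment time

    authors_now = {'paper42': {'Ms. X', 'Dr. Y'}}  # Dr. Y added post-acceptance

    for pid, reviewers in assignment.items():
        for r in reviewers:
            if (conflicted(r, authors_now[pid], coi)
                    and not conflicted(r, authors_at_submission[pid], coi)):
                print(r, 'silently became conflicted on', pid)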

In other areas, such as Signal Processing and Computer Architecture, changes to the author list post-acceptance are not allowed. Period. (Granted, some of these conferences also limit the number of submissions any single individual can be a part of, and fluid author lists would defeat that limitation by providing a backdoor for hyper-prolificity.)

Even if we don't want to be quite so strict, it does make sense to discourage the practice of deliberately keeping the initial author list incomplete. Perhaps AI conferences should emphasize the obvious at submission time: author lists are expected to be complete at the time of submission.

Rao



Saturday, April 9, 2016

Hyper Paper Productivity (or the most prolific authors of IJCAI submissions..)

The other day we were compiling statistics on IJCAI submissions, and I was struck by some authors who seemed to be co-authors on an unbelievably large number of submissions.

At first I thought that the inflated numbers might be because the analysis was mistakenly merging multiple different people who happen to have the same name.

So we redid the statistics, this time using the author contact email addresses to resolve identities. Surprisingly, the statistics on the hyper-prolific authors remained unchanged.
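
For those who want to replicate the exercise, the deduplication logic is simple enough. A minimal sketch with made-up records (the lower-cased contact email serves as a rough per-person key):

    from collections import Counter

    # Made-up records: (paper_id, [(author name, contact email), ...])
    submissions = [
        ('p1', [('A. Author', 'a@uni.edu'), ('B. Author', 'b@lab.org')]),
        ('p2', [('A. Author', 'a@uni.edu')]),
        # ... one entry per submission
    ]

    by_name, by_email = Counter(), Counter()
    for _, authors in submissions:
        for name, email in authors:
            by_name[name] += 1            # may conflate people sharing a name
            by_email[email.lower()] += 1  # email as a rough identity key

    print(by_email.most_common(5))        # the hyper-prolific end of the tail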

Being of the "basically lazy" kind, I found this level of productivity beyond my meager ken. So I asked some of my less lazy colleagues to guess how many submissions the most prolific co-authors had.

Unfortunately, my colleagues seemed to be just as behind the times as I am. None of them could hazard a guess that was even within 50% of the true number--even after I gave them a couple of chances to improve their guesses.

Clearly I need to associate with more productive colleagues! Be that as it may,  let's come back to these hyper-productive authors. 

The number of papers co-authored by the most prolific co-author at IJCAI turns out to be.. DRUMROLL, please...

32 


Yes, you heard me right. *thirty-two*. And no, this is not a single outlier. Next in line were authors with

23

21 

co-authored submissions, respectively. Without giving anything away, let me say that all these prolific authors are writing papers in applied ML (an area that also has the largest number of submissions to IJCAI).

When I shared these numbers with my (admittedly lazy) colleagues, they countered that the number of papers accepted from these authors would surely be close to zero--after all, who other than a Map-Reduce program can churn out so many papers?

Well, little do these colleagues know! These papers are not the bottom of the barrel.  About the only damning thing that can be said is that an overwhelming majority were basically borderline. 

Which made me wonder: what if these prolific authors decided to focus their energies on fewer papers of potentially higher quality? (This reminded me of a colleague's backhanded compliment about one of his PhD students: "Having him is like having 10 mediocre students!")


Some colleagues pointed out that many IEEE conferences put an upper bound on the number of submissions from any one author, and wondered if IJCAI should consider this too. Something to think about from a mechanism-design point of view.

But clearly, there seem to be other pressures pushing people towards hyper-paper-productivity, even at the expense of spreading themselves too thin. Here is hoping that some resistance develops on this front.

Even in these days of inflated resume expectations, it is perhaps more important to be known for a contribution rather than a count; for a significant idea, rather than for breaking a Guinness record for the number of papers, or aiming for a Kim Kardashian-esque "famous for being famous" distinction.

As always, your comments are welcome!

Rao

ps: There are some obvious implications of hyper-paper-productivity for the "spinning the wheel of fortune" phenomenon I talked about earlier; but I will leave them to your imagination.

Disclaimer: Author anonymity was never compromised during this analysis. There is no  implied correlation--positive or negative--between the number of submissions and the number of acceptances. 


Addendum: A colleague pointed out that ICSE, one of the main conferences in Software Engineering, instituted a maximum of 3 papers per author starting in 2017. There has been a very interesting discussion about the policy in that community.

Tuesday, April 5, 2016

What's hot in ijcAI Redux

Here is the word cloud from the titles of the papers accepted  to the  IJCAI-16 technical program.

[Image: word cloud of IJCAI-16 accepted-paper titles]

A big thanks to the IJCAI-16  Program Committee for a gargantuan job done well!

Rao



A sincere "Thank you" to the Program Committee..

[The following was sent to the IJCAI program committee this morning]

Dear IJCAI-16 Program Committee members:

Last night, I sent out decision notifications to the authors of the
2,294 papers submitted to IJCAI this year.

For the past three months, I have had a bird's-eye view of the IJCAI
program committee in action. For the last ten of those days, I had the
additional privilege of poring over the reviews and discussions of a
broad swath of the IJCAI papers.

IJCAI reviewing is by no means perfect; no conference on a subject as
diverse as AI, with a 2000-strong program committee, can possibly hope to
be. So, part of the experience, no doubt, was like watching the
proverbial sausage getting made.

But then there is the other part. The heady and gratifying experience
of seeing the many amazingly conscientious reviewers hard at work.  I
have seen papers with close to thirty comments, and reviewers having
highly intellectual discussions stretching over multiple days about
the subject matter. I have seen quite possibly the longest meta-review
ever written anywhere in science. I have seen SPC members and Area
Chairs asking me to wait just a little more, so they can try and find
additional reviewers at the last minute to make sure a paper gets a
fair evaluation.

Above all, I have seen many many papers getting a quality of reviewing
that other fields can only dream about.


The whole experience made me remember afresh why I agreed to take on this
seemingly masochistic responsibility in the first place.

Thank you for all your help!  Let 
me end with this gem from one of the meta-reviews:

"The positive reviewers fought for the papers, and the negative
finally cheered up in their opinion. At the end, the paper is not a
homerun, but after all IJCAI-16 is a conference where people like to
discuss the results."



Warm Regards
Rao
---
Subbarao Kambhampati
Program Chair, IJCAI-2016
http://rakaposhi.eas.asu.edu

Monday, March 28, 2016

A federation of reviewing communities? Area-wise analysis of the amount of discussion on IJCAI papers...

I always thought that one of the defining characteristics of AI conferences is the significant amount of inter-reviewer discussions on each paper.

In planning, for example, it is not all that unheard of to have discussions that are as long as the paper itself (yes, we are thinking of you, **@trik!).

Having also handled many AI&Web papers over the years, I did have a hunch that the amount of discussion is not the same across areas.

Now that we have access to the reviews for all 2300 papers of IJCAI, we decided to see how the various areas stack up.

We counted the number of words in all the discussion comments for each paper, and then averaged them across each area.  Here is what we got:

[Chart: average length (in words) of discussion comments per paper, by area]

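The computation itself is straightforward; here is a minimal sketch, with toy records standing in for the real Easychair comment data:

    from collections import defaultdict

    def avg_discussion_words(papers):
        # papers: iterable of (area, list of discussion-comment strings)
        totals, counts = defaultdict(int), defaultdict(int)
        for area, comments in papers:
            totals[area] += sum(len(c.split()) for c in comments)
            counts[area] += 1
        return {area: totals[area] / counts[area] for area in totals}

    toy = [('Planning & Scheduling', ['I disagree, because ...', 'Fair point, but ...']),
           ('Machine Learning', ['Accept.'])]
    print(avg_discussion_words(toy))
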
So, papers in Planning & Scheduling, Heuristic Search, KR, Constraints, and MAS areas get significantly more discussion, compared to Machine Learning, NLP and AI&Web.

The AIW statistic is somewhat understandable, as the reviewers there are not drawn solely from the AI community and may have different cultural norms.

The Machine Learning statistic, however, is worrisome, especially since a majority of submissions are in ML. Some of the ML colleagues I talked to say that things are not that different at other ML conferences (ICML, NIPS, etc.). Which makes me wonder whether the much-talked-about NIPS experiment is a reflection of peer reviewing in general, or of peer reviewing in ML...

In case you are wondering, here is the plot for the length of reviews  (again measured in terms of the number of words across all reviews). Interestingly, AIW submissions have longer reviews than ML and NLP!

[Chart: average total review length (in words), by area]

So you know!

Rao
(with all real legwork from Lydia, aka IJCAI-16 data scientist...)


Tuesday, March 22, 2016

Burning Man, IJCAI style.. (or assembling a ~2000-strong program committee from scratch for a one-time task..)

Once every year, in the Black Rock Desert of Nevada, an entire city springs up to support Burning Man, the storied desert festival. As the festival site says, Burning Man is a vibrant participatory metropolis generated by its citizens (only to be erased and rebuilt again the next year).

Then there is the haunting Tibetan Buddhist ritual of Mandala formation,  where an exquisite sand painting is created painstakingly over days, only to be erased once it is done.

I think about these as I watch the 2000-strong IJCAI Program Committee winding down their reviewing and making final decisions on the 2300 papers submitted to IJCAI.

They gathered out of thin air over the last six months. For the main track, it started with me recruiting 44 area chairs, who recruited 340 senior program committee members, who in turn recruited 1400 PC members. All the recruitment was done through good old-fashioned (and ultra-low-tech?) email.

Last week, we provided a mechanism for the program committee members to nominate their colleagues for exemplary reviewing and discussion. To date, we have already received 130 nominations. It is so gratifying to read the justifications accompanying these nominations. It makes me proud to be a member of a community that takes its reviewing responsibilities so conscientiously! This is all the more heartening considering that peer reviewing is not explicitly incentivized by the normal performance-recognition mechanisms! (In the coming weeks, I hope to share more metrics about the reviewing process.)

I wish I knew all the program committee members, so I could thank them personally. But AI is just too large and diverse for that! So instead, this is my public thanks.

Maybe there are better alternatives that provide for a persistent program committee (IROS seems to do this). But I wonder if we would miss the Burning Man/Mandala feel with them.

Rao

Monday, March 14, 2016

Spinning the Wheel of Fortune: Some quasi-humorous consequences of endless conference deadlines..

Not too long ago, there used to be a couple of real conference deadlines per year in AI. You work on your papers through the year, submit them, wait for the reviews, and if those don't work out, revise and resubmit for the next cycle, which is a year or at least six months away.

These days, clearly, things have changed--especially in AI--where there are a whole variety of conferences with endless and sometimes overlapping deadlines.

There are many things that can be said about this brave new world, but I want to use this post to share a couple of quasi-humorous consequences..

[Withdrawal after author response]: The author response period for IJCAI-16 ended this Saturday, and we have been getting a steady trickle of mails from authors asking us to withdraw their papers. Interestingly, most of them seem to have suddenly realized a "lethal error" [sic] in their experiments and thus want to withdraw the paper, as urgently as possible! The fact that a new set of conference deadlines is around the corner (e.g., ACL on 3/18) is just a mere coincidence.

Interestingly, submitting to another conference in the middle of the review period of the current one (perhaps because of poor reviews) is considered a violation; but if the authors just send a mail requesting withdrawal of their paper, it magically becomes "legal" (but does it become ethical?..).


[Reviews ready before papers!]: On the night we sent the assignments to the 2000 program committee members, I knew I still had to fine-tune a couple of features of the IJCAI review form. However, as it had been an exhausting weekend, I thought I would do that in a couple of days and went to sleep. After all, reviewers would need time to read the papers, right? And all I needed to do was finalize the review form before they started submitting reviews. Little did I know!

By the time I woke up from a brief 4-hour nap, my mailbox already had some 30 messages from Easychair about newly submitted reviews! While scrambling to finalize the review form pronto, I was curious how this superhuman feat of near-instantaneous reviewing was possible. It turned out that several of those papers were being re-submitted from a couple of conferences whose cycles had just ended, and the papers happened to have overlapping reviewers! So reviewers can play the game as well as the authors: send the same papers and get the same reviews! ;-)

To some extent, the conferences are all complicit in this, given the excessive interest in talking about the number of submissions a conference receives. After all, the bigger the denominator, the lower the acceptance rate, and thus the higher the perceived selectivity. Apparently the Ivy Leagues are not the only ones running this racket..

We collectively bemoan the quality of submissions and reviews. Maybe we need to put our money where our mouths are, and work on designing mechanisms that don't incentivize the wrong things.. For starters, I hope we start caring more about the impact of a conference (measured by citations, for example) than its "selectivity".

Enough chit chat--I hear another conference deadline approaching... time to give the wheel another whirl..

Rao

Saturday, March 5, 2016

Papers "written by" the program chair and other unexpected consequences of increased diversity in conference participation..

One of the beautiful things about conferences in general, and AI conferences in particular, is how cosmopolitan they have become.

Gone are the days when all the participants and submissions were from just a couple of "usual suspect" countries. It should be no surprise to anyone that neither the US nor Europe is the top region in terms of submissions to IJCAI. It is also worth noting that the appeal of AI has broadened significantly, and there are submissions from many authors who are first-time IJCAI submitters. A similar increase in internationalization has also occurred in the program committee.

All of this internationalization of IJCAI is truly a cause for celebration. Knuth says that he wants the names of all the authors cited in his Art of Computer Programming books to be in their native script. I don't know if I can ever get mentioned in those books, but I certainly look forward to seeing కంభంపాటి సుబ్బారావు in technical forums ;-). Heck, being an AI aficionado, I am sure the day will come when people can write in the language they feel most fluent in, and the AI Babel Fish will just render it flawlessly into the reader's preferred language.

In the meantime, however, the increased diversity and internationalization do bring up some challenges that we don't always anticipate. Here are a couple of interesting ones:


  • There are several papers currently going through IJCAI review that list "Subbarao Kambhampati" as the sole author. Apparently some authors thought that the way to make their submissions compliant with the double-blind reviewing requirement was to just put me down as the author. (We decided to leave them in the reviewing pool, as they do follow the spirit--if not the letter--of the double-blind review process. Of course, it did lead to a couple of irate mails from some PC members asking (a) why I am submitting papers to the conference I am the program chair for, and (b) why I am not even following the rules ;-).)
  • In some cases, the program committee members have developed surprising interpretations of "conflict of interest" for double-blind reviewing. Basically, to ensure that they were not reviewing a paper from an author they have a CoI with, they decided to (a) guess the identity of the authors, (b) send mails to them to see if they are indeed the authors, and (c) declare a CoI when they find that the author is in fact who they guessed ;-). This would have been great entertainment, if I didn't have to go find new reviewers for those papers :-(
The lesson to be learned, I guess, is that we can't hope for increased diversity and internationalization in participation while at the same time assuming that everyone has the same common understanding of the process that the old boys do. Things need to be put in writing, however obvious they might be to some (large) subset of the participants.


Rao

Thursday, February 18, 2016

So what papers are IJCAI-16 PC members most interested in reading (based on the bid popularity)?

One of the unsaid things about a large conference like IJCAI is that the quality of reviews a paper gets is critically correlated with the number of bids the paper gets.

When a paper doesn't get enough bids, it has to be matched manually to PC members--something that is very error-prone when we are talking about 2000 papers, 1700+ reviewers, and an ever-condensed reviewing time.

Having just completed making some 10,000+ reviewer assignments using 53,000 reviewer bids, I am struck not just by the well-known long-tail phenomenon in the number of bids papers get, but also by how many papers get almost "shut out" (receive zero bids).

So we thought it would be fun to do an analysis of which papers get a lot of bids (based on a word-cloud analysis of the paper titles).

52892 bids were made by 1723 program committee members over 2000 papers in the main track (or an average of roughly 26 bids per paper and 31 bids per PC member). Here is how they were distributed:


[Histogram: distribution of combined bids per paper]

So, we naturally wanted to find out which papers get more vs. fewer bids. Numbering the bins 0 (for <=3 combined bids), 1 (for 4-10 combined bids), etc., we made word clouds from the titles of the papers in each bin. Thus the (helpfully shaped) cloud 8 has the words occurring in the papers getting >70 bids ;-)
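
For the record, here is a sketch of the binning. Only the cut-offs for bins 0, 1, and 8 are stated above; the middle boundaries in the code are made up for illustration, and the clouds use the off-the-shelf wordcloud package:

    from bisect import bisect_right
    from collections import defaultdict
    from wordcloud import WordCloud  # pip install wordcloud

    # Bin 0 is <=3 bids, bin 1 is 4-10, bin 8 is >70 (as in the post);
    # the middle cut-offs are invented for illustration.
    cutoffs = [3, 10, 20, 30, 40, 50, 60, 70]

    def bin_of(bids):
        return bisect_right(cutoffs, bids - 1)  # yields a bin index 0..8

    titles_by_bin = defaultdict(list)
    for title, bids in [('Deep Something for X', 85), ('A Niche Logic Result', 2)]:
        titles_by_bin[bin_of(bids)].append(title)

    for b, titles in sorted(titles_by_bin.items()):
        WordCloud().generate(' '.join(titles)).to_file('bin_%d.png' % b)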

Here then are the bins from the "least bids (0)" to "most bids (8)"


[Images: word clouds for bins 0 (fewest bids) through 8 (most bids)]

So there you go... I have my own interpretation of this data, but I would rather hear your interpretations ;-)

Rao
(with all the help from Lydia Manikonda-- IJCAI-16 Data Scientist)


Tuesday, February 9, 2016

What's hot in AI? (from IJCAI-16 Main Track Submission Titles)


Here is the word cloud of IJCAI-16 paper titles.


[Image: tf-idf word cloud of IJCAI-16 submission titles]

The above is with tf-idf weighting, so it throws out non-discriminative words (like "is", "of", and "deep" :-)
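
If you want to try this at home, here is a minimal sketch (with stand-in titles) using scikit-learn's TfidfVectorizer and the wordcloud package; summing the per-title tf-idf rows gives a corpus-level weight per word, which downweights words that appear in nearly every title:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from wordcloud import WordCloud

    titles = ['Deep Learning for Planning', 'Planning via Heuristic Search',
              'Deep Reinforcement Learning for Scheduling']  # stand-ins for ~2300 titles

    vec = TfidfVectorizer()            # idf downweights ubiquitous words
    tfidf = vec.fit_transform(titles)  # one row per title
    weights = dict(zip(vec.get_feature_names_out(), tfidf.sum(axis=0).A1))

    WordCloud().generate_from_frequencies(weights).to_file('titles_tfidf.png')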


Here is another tf/idf normalized view...

[Image: another tf-idf-normalized word cloud]
                                                                                            (Clouds Credit: Lydia Manikonda)

So apparently IJCAI does remain the conference for the Whole AI Enterprise ;-)


Finally, here is the normal frequency word cloud...

[Image: raw-frequency word cloud of IJCAI-16 submission titles]

Rao
Feb 9th

Sunday, February 7, 2016

My earliest IJCAI submissions... ;-)

As we are caught up in the IJCAI-16 paper-assignment phase, I recalled, with some fondness, my earliest IJCAI papers.

My very first submission was to IJCAI 1985. That paper was unfortunately rejected; I managed to find those reviews ;-)  It did go on to form the basis for my MS thesis and is currently my most cited paper...

My next submission was to IJCAI 1989, and it fared much better--getting accepted to the conference. I remember going to the huge conference in Detroit. I also remember that Tom Dean, who was tasked with giving a talk on the last day of the conference about what was new in planning and reasoning at IJCAI-89, asked me for a couple of my plastic transparencies, and covered them in his talk! Heady feeling.. ;-) That paper went on to form the basis for my PhD thesis. I managed to find its reviews too.

Obviously, I don't have any submissions this time--but I wish all the rest of you with submissions great reviews!

Rao
2/7/16