Tuesday, June 7, 2016

Announcing IJCAI-2016 Outstanding Program Committee Members: Towards making good reviewing an attainable goal..


I have always felt that "outstanding program committee member" awards are somewhat quixotic, inasmuch as we tend to pick just a handful of people for the recognition. As program committees continue to grow in size (IJCAI 2016 had over 2,000 people on its program committee), and as we are faced with continual deadlines and hyper-paper-productivity, it seems counterproductive to recognize only a vanishingly small percentage of people for reviewing, as it might inadvertently send the message that good reviewing is a rather unattainable, "win a Nobel Prize"-like goal.

So, for this IJCAI, we asked the program committee members to nominate peers who, in their eyes, had done an outstanding job of reviewing/handling a paper. We explicitly told them that our goal was to recognize a significant percentage of the program committee. We set up a simple Google Forms interface through which PC members could submit nominations and justifications.

As soon as we made the link public, we had an onslaught of nominations!  Apparently there was a pent-up demand for such a mechanism..

Given the size and diversity of the program committee, we went through all the nominations and justifications manually to make sure that the system was not being gamed.

Sure enough, we did indeed catch a few instances of unsubstantive/non-sequitur nominations, such as

Prof. X has always submitted to IJCAI and I nominate X for outstanding reviewing

or

Dr. Y asked me to review some papers, and I nominate Y for outstanding reviewing

or, in one particularly egregious case, a program committee member who had been ejected from the IJCAI program committee for complete malfeasance, and who still had the audacity to nominate a bunch of his/her colleagues..

But the overwhelming majority of nominations were  substantive and heartfelt. It was quite a gratifying experience to go through them!

Yesterday, we updated the IJCAI program committee listing on the website by putting a resplendent blue ribbon ("Nominated for good review") in front of each program committee member's name, one for each nomination he/she received.

You can now look at these  beribboned program committee members--nearly 200 of them (or about 10% of the program committee)--at the webpage here:

http://ijcai-16.org/index.php/welcome/view/program_committee

If any of these are your colleagues, please take time to congratulate them. They did what they did despite knowing that there are few tangible rewards and incentives for good reviewing.

If you are an author who submitted a paper to IJCAI and got thoughtful reviews, you know that your experience was not by any means isolated!

For myself, as a graying member of the AI community, I am freshly tickled pink that "good reviewing" is still the rule rather than the exception for us! Thank you, IJCAI-16 program committee!


--Rao

Wednesday, April 13, 2016

The (increasing?) practice of expanding co-author lists after paper acceptance...

Something funny happened on EasyChair the day after acceptance decisions were sent.

We noticed that a lot of people had logged in and added additional co-authors to their--now accepted--papers. In many cases, 2, 3, and even 4 authors were added!

Since we had kept backup snapshots of EasyChair every week during the review period, we knew the original author lists for all the papers (which is how they are listed on the accepted-papers list).
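(For the curious: detecting this is a simple snapshot diff. Below is a minimal sketch of the idea, assuming each weekly snapshot is exported as a CSV with columns paper_id and authors; the column names, file names, and the semicolon-separated author field are my assumptions for illustration, not EasyChair's actual export format.)

    import csv

    def load_author_lists(path):
        # Map paper_id -> list of author names from one snapshot.
        # Assumes a hypothetical semicolon-separated "authors" column.
        with open(path, newline="", encoding="utf-8") as f:
            return {row["paper_id"]: row["authors"].split("; ")
                    for row in csv.DictReader(f)}

    before = load_author_lists("snapshot_review_period.csv")   # hypothetical file
    after = load_author_lists("snapshot_post_decision.csv")    # hypothetical file

    for pid, authors_now in sorted(after.items()):
        added = set(authors_now) - set(before.get(pid, []))
        if added:
            print(f"Paper {pid}: {len(added)} author(s) added: {sorted(added)}")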

I understand that sometimes we may get non-trivial help from a colleague after paper submission that qualitatively changes the camera-ready version, thus legitimately necessitating author list expansion. We have done this ourselves a couple of times in our group.

Still, it is a bit surprising that 50+ papers suddenly found this need. It is even more surprising that many of them found they had additional help from not just one but "multiple" forgotten co-authors. Clearly the adage--that success has many parents, while failure is an orphan--seems to be playing out in spades here ;-)

So we decided to look into this phenomenon more closely by asking authors to provide a justification for why the new co-authors need to be added. We are now getting emails from even more people, with requests to add 1, 2, or 3 co-authors.

The justifications range from the quite reasonable ones, such as

"I forgot to give credit to an undergraduate intern who helped with the work"

"X helped me prove an additional theorem" 

to the  somewhat questionable

"X helped with the rebuttal"

"X financially supported this work"

"X, Y and Z just got permission from their companies to join as authors"

"X wants to come to the conference, and thus would like their name on the paper" 

"because X is a respected researcher in our university"

to the utterly  inexplicable

"I have forgotten about listing any of my co-authors".

"We didn't put some of the authors when submitting for blind review"

"When trying to beat the submission deadline, we wanted to save time by listing just one author"


It seems to me that the situation has come a long way from adding authors when legitimately needed to deliberately keeping original author lists incomplete.

There may also be some implicit cultural norms at work here, inasmuch as over 90% of the papers expanding their author lists after acceptance are from a specific region.

Large-scale author-list modification post-acceptance poses several quandaries for the conference. In addition to the obvious intangible long-term ones, such as cheapening co-authorship, there is a more immediate and tangible one: we rely on author lists to ensure that conflict-of-interest situations are avoided. It becomes very hard to do that if author lists are fluid and subject to massive changes after acceptance.
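To make the conflict-of-interest point concrete, here is a minimal sketch of a typical COI screen (the function and the data structures are hypothetical illustrations, not EasyChair's actual mechanism):

    # Reviewer eligibility is screened against the author list as it
    # stands at assignment time.
    def eligible_reviewers(paper_authors, reviewers, conflicts):
        # 'conflicts' is a set of (reviewer, author) pairs declared conflicted.
        return [r for r in reviewers
                if not any((r, a) in conflicts for a in paper_authors)]

    # A co-author added after acceptance was never in 'paper_authors' when
    # this check ran, so a conflicted reviewer may already have reviewed
    # (or even championed) the paper.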

In other areas, such as Signal Processing and Computer Architecture, changes to the author list post-acceptance are not allowed. Period. (Granted, some of these conferences also limit the number of submissions any single individual can be a part of, and fluid author lists defeat that limitation by providing a backdoor for hyper-prolificity.)

Even if we don't want to be quite so strict, it does make sense to discourage the practice of keeping the initial author list deliberately incomplete. Perhaps AI conferences should emphasize the obvious at submission time: that author lists are expected to be complete at the time of submission.

Rao



Saturday, April 9, 2016

Hyper Paper Productivity (or the most prolific authors of IJCAI submissions..)

The other day we were compiling statistics on IJCAI submissions, and I was struck by some authors who each seemed to be a co-author on an unbelievably large number of submissions.

At first I thought that the inflated numbers might be an artifact of the analysis mistakenly merging multiple different people who happen to have the same name.

So we redid the statistics, this time using the authors' contact email addresses to resolve identities. Surprisingly, the statistics on the hyper-prolific authors remained unchanged.
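(Concretely, the tally looks something like the sketch below, assuming a hypothetical CSV export with one row per submission-author pair and columns paper_id, author_name, and author_email; the layout and file name are my assumptions, not the actual EasyChair format.)

    import csv
    from collections import Counter

    # Count submissions per author, keyed by contact email rather than by
    # name, so that two different people named "J. Smith" are not merged.
    papers_per_author = Counter()
    seen = set()
    with open("submission_authors.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            key = (row["paper_id"], row["author_email"].strip().lower())
            if key not in seen:  # guard against duplicate rows
                seen.add(key)
                papers_per_author[key[1]] += 1

    for email, n in papers_per_author.most_common(5):
        print(n, email)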

Being of the "basically lazy" kind myself, I found this level of productivity beyond my meager ken. So I asked some of my less lazy colleagues to guess how many submissions the most prolific co-authors had.

Unfortunately, my colleagues seemed to be just as behind the times as I am. None of them could hazard a guess that came even within 50% of the true number, even after I gave them a couple of chances to improve their guesses.

Clearly I need to associate with more productive colleagues! Be that as it may,  let's come back to these hyper-productive authors. 

The number of papers co-authored by the most prolific co-author at IJCAI turns out to be... DRUMROLL please...

32 


Yes, you heard me right. *Thirty-two*. And no, this is not a single outlier. Next in line were authors with

23

21 

co-authored submissions, respectively. Without giving anything away, let me say that all these prolific authors are writing papers in applied ML--the area that also has the largest number of submissions to IJCAI.

When I shared these numbers with my (admittedly lazy) colleagues, they countered that the number of papers accepted from these authors would surely be close to zero--after all, who other than a Map-Reduce program can churn out so many papers?

Well, little do these colleagues know! These papers are not the bottom of the barrel.  About the only damning thing that can be said is that an overwhelming majority were basically borderline. 

Which made me wonder--what if these prolific authors decided to focus their energies on fewer papers of potentially higher quality? (Reminded me of a colleague's backhanded compliment about one of his PhD students: "Having him is like having 10 mediocre students!")


Some colleagues pointed out that many IEEE conferences put an upper bound on the number of submissions from any one author, and wondered if IJCAI should consider it too. Something to think about from a mechanism-design point of view.

But clearly, there seem to be other pressures pushing people towards hyper-paper-productivity, even at the expense of spreading themselves too thin. Here's hoping that some resistance develops on this front.

Even in these days of inflated resume expectations, it is perhaps more important to be known for a contribution rather than a count; for a significant idea rather than for breaking a Guinness record on the number of papers, or for aiming at a Kim Kardashian-esque "famous for being famous" distinction.

As always, your comments are welcome!

Rao

ps: There are some obvious implications of hyper-paper-productivity for the "spinning the wheel of fortune" phenomenon I talked about earlier; but I will leave those to your imagination.

Disclaimer: Author anonymity was never compromised during this analysis. There is no  implied correlation--positive or negative--between the number of submissions and the number of acceptances. 


Addendum: A colleague pointed out that ICSE, one of the main conferences in Software Engineering, instituted a maximum of 3 papers per author starting in 2017. There seems to be a very interesting discussion about the policy in that community.

Tuesday, April 5, 2016

What's hot in ijcAI Redux

Here is the word cloud from the titles of the papers accepted  to the  IJCAI-16 technical program.

A big thanks to the IJCAI-16  Program Committee for a gargantuan job done well!

Rao


What's hot in ijcAI

A sincere "Thank you" to the Program Committee..

[The following was sent to the IJCAI program committee this morning]

Dear IJCAI-16 Program Committee members:

Last night, I sent out decision notifications to the authors of the
2,294 papers submitted to IJCAI this year.

For the past three months, I have had a bird's-eye view of the IJCAI
program committee in action. For the last ten of those days, I had the
additional privilege of poring over the reviews and discussions of a
broad swath of the IJCAI papers.

IJCAI reviewing is by no means perfect; no conference on a subject as
diverse as AI, with a 2,000-strong program committee, can possibly hope
to be. So, part of the experience, no doubt, was like watching the
proverbial sausage getting made.

But then there is the other part. The heady and gratifying experience
of seeing the many amazingly conscientious reviewers hard at work.  I
have seen papers with close to thirty comments, and reviewers having
 
highly intellectual discussions stretching over multiple days about
the subject matter. I have seen quite possibly the longest meta-review
ever written anywhere in science. I have seen SPC members and Area
Chairs asking me to wait just a little more, so they can try to find
additional reviewers at the last minute to make sure a paper gets a
fair evaluation.

Above all, I have seen many, many papers getting a quality of reviewing
that other fields can only dream about.


The whole experience made me remember afresh why I agreed to take on this
seemingly masochistic responsibility in the first place.

Thank you for all your help!  Let 
me end with this gem from one of the meta-reviews:

"The positive reviewers fought for the papers, and the negative
finally cheered up in their opinion. At the end, the paper is not a
homerun, but after all IJCAI-16 is a conference where people like to
discuss the results."



Warm Regards
Rao
---
Subbarao Kambhampati
Program Chair, IJCAI-2016
http://rakaposhi.eas.asu.edu

Monday, March 28, 2016

A federation of reviewing communities? Area-wise analysis of the amount of discussion on IJCAI papers...

I always thought that one of the defining characteristics of AI conferences is the significant amount of inter-reviewer discussions on each paper.

In planning, for example, it is not all that unheard of to have discussions that are as long as the paper itself (yes, we are thinking of you, **@trik!).

Having also handled many AI&Web papers over the years, I did have a hunch that the amount of discussion is not the same across areas.

Now that we have access to the reviews for all ~2,300 IJCAI papers, we decided to see how the various areas stack up.

We counted the number of words in all the discussion comments for each paper, and then averaged these counts within each area. Here is what we got:

[Chart: average discussion length, in words per paper, by area]

So, papers in the Planning & Scheduling, Heuristic Search, KR, Constraints, and MAS areas get significantly more discussion than those in Machine Learning, NLP, and AI&Web.
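(For those curious about the mechanics, the computation is a straightforward group-by. Here is a minimal sketch, assuming a hypothetical CSV with one row per discussion comment and columns paper_id, area, and comment_text; the layout and file name are my assumptions, not the actual EasyChair export. The review-length plot below is computed analogously from the review texts.)

    import csv
    from collections import defaultdict

    # Total discussion words per paper, then average those totals per area.
    # (Papers with zero comments would need to be seeded with zeros from the
    # full submission list; omitted here for brevity.)
    words_per_paper = defaultdict(int)
    paper_area = {}
    with open("discussion_comments.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            words_per_paper[row["paper_id"]] += len(row["comment_text"].split())
            paper_area[row["paper_id"]] = row["area"]

    area_total, area_count = defaultdict(int), defaultdict(int)
    for pid, nwords in words_per_paper.items():
        area_total[paper_area[pid]] += nwords
        area_count[paper_area[pid]] += 1

    for area in sorted(area_total, key=lambda a: area_total[a] / area_count[a],
                       reverse=True):
        print(f"{area}: {area_total[area] / area_count[area]:.0f} avg words/paper")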

The AIW statistic is somewhat understandable, as the reviewers there are not all from the AI community and may have different cultural norms.

The Machine Learning statistic, however, is worrisome, especially since a majority of submissions are in ML. Some of the ML colleagues I talked to say that things are not that different at other ML conferences (ICML, NIPS, etc.). Which makes me wonder whether the much-talked-about NIPS experiment is a reflection of peer reviewing in general, or of peer reviewing in ML...

In case you are wondering, here is the plot for the length of reviews (again measured in terms of the total number of words across all reviews). Interestingly, AIW submissions have longer reviews than ML and NLP!

[Chart: average review length, in words, by area]

So you know!

Rao
(with all real legwork from Lydia, aka IJCAI-16 data scientist...)


Tuesday, March 22, 2016

Burning Man, IJCAI style.. (or assembling a ~2,000-strong program committee from scratch for a one-time task..)

Once every year, in the Black Rock Desert of Nevada, an entire city springs up to support Burning Man, the storied desert festival. As the festival's site says, Burning Man is a vibrant participatory metropolis generated by its citizens (only to be erased and rebuilt again the next year).

Then there is the haunting Tibetan Buddhist ritual of Mandala formation,  where an exquisite sand painting is created painstakingly over days, only to be erased once it is done.

I think about these as I look at the 2,000-strong IJCAI Program Committee winding down its reviewing and making final decisions on the 2,300 papers submitted to IJCAI.

They gathered out of thin air over the last six months. For the main track, it started with me recruiting 44 area chairs, who then recruited 340 senior program committee members, who in turn recruited 1,400 PC members. All the recruitment was done through good old-fashioned (and ultra-low-tech?) email.

Last week, we provided a mechanism for program committee members to nominate their colleagues for exemplary reviewing and discussion. To date, we have already received 130 nominations. It is so gratifying to read the justifications accompanying these nominations. It makes me proud to be a member of a community that takes its reviewing responsibilities so conscientiously! This is all the more heartening considering that peer reviewing is not explicitly incentivized by the usual performance-recognition mechanisms! (In the coming weeks, I hope to share more metrics about the reviewing process.)

I wish I knew all the program committee members, so I could thank them personally. But AI is just too large and diverse for that! So instead, this is my public thanks.

Maybe there are better alternatives that provide for a persistent program committee (IROS seems to do this). But I wonder if we would miss the Burning Man/Mandala feel with them.

Rao