Wednesday, April 13, 2016

The (increasing?) practice of expanding co-author lists after paper acceptance...

Something funny happened on EasyChair the day after acceptance decisions were sent.

We noticed that a lot of people had logged in and added co-authors to their--now accepted--papers. In many cases, 2, 3, or even 4 authors were being added!

Since we had kept backup snapshots of EasyChair every week during the review period, we knew the original author lists for all the papers (which is how they are being listed on the accepted-paper list).
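With snapshots in hand, spotting the phenomenon amounts to diffing author lists before and after the decisions. A minimal sketch of that check, with invented paper IDs and names purely for illustration:

```python
# Hypothetical sketch: detect papers whose author lists grew after acceptance
# by comparing a pre-decision snapshot against the current state.

def added_authors(snapshot, current):
    """Return {paper_id: [newly added authors]} for papers whose lists grew."""
    changes = {}
    for paper_id, old_authors in snapshot.items():
        new_authors = current.get(paper_id, old_authors)
        added = [a for a in new_authors if a not in old_authors]
        if added:
            changes[paper_id] = added
    return changes

# Invented data: paper 101 gained two authors after acceptance, 102 did not.
snapshot = {101: ["A. Author"], 102: ["B. Builder", "C. Coder"]}
current  = {101: ["A. Author", "D. Drafter", "E. Editor"],
            102: ["B. Builder", "C. Coder"]}

print(added_authors(snapshot, current))  # {101: ['D. Drafter', 'E. Editor']}
```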

I understand that sometimes we may get non-trivial help from a colleague after paper submission that qualitatively changes the camera-ready version, thus legitimately necessitating author list expansion. We have done this ourselves a couple of times in our group.

Still, it is a bit surprising that 50+ papers suddenly found this need. Even more surprising, many of them found that they had additional help from not just one but "multiple" forgotten co-authors. Clearly the adage--that success has many parents, while failure is an orphan--seems to be playing out in spades here ;-)

So we decided to look into this phenomenon more closely by asking authors to justify why the new co-authors needed to be added. We are now getting mail from even more people, with requests to add 1, 2, or 3 co-authors.

The justifications range from the quite reasonable ones, such as

"I forgot to give credit to an undergraduate intern who helped with the work"

"X helped me prove an additional theorem" 

to the somewhat questionable

"X helped with the rebuttal"

"X financially supported this work"

"X, Y and Z just got permission from their companies to join as authors"

"X wants to come to the conference, and thus would like their name on the paper" 

"because X is a respected researcher in our university"

to the utterly inexplicable

"I have forgotten about listing any of my co-authors".

"We didn't put some of the authors when submitting for blind review"

"When trying to beat the submission deadline, we wanted to save time by listing just one author"

It seems to me that the situation has shifted from adding authors when legitimately needed to deliberately keeping the original author list incomplete.

There may also be some implicit cultural norms at work here, inasmuch as over 90% of the papers expanding their author lists after acceptance are from a specific region.

Large-scale author-list modification post-acceptance does pose several quandaries for the conference. In addition to the obvious intangible long-term ones, such as cheapening co-authorship, there is a more immediate and tangible one: we rely on author lists to ensure that conflict-of-interest situations are avoided. That becomes very hard to do if author lists are fluid and subject to massive changes after acceptance.
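To make the tangible problem concrete: reviewers are vetted against the author list as it stood at submission time, so a co-author added after acceptance was never screened at all. A minimal sketch, where all names and the conflict table are invented for illustration:

```python
# Hypothetical sketch of why fluid author lists break conflict-of-interest
# checks. A reviewer assignment is vetted against the author list; an author
# added later silently introduces an unscreened conflict.

def has_conflict(reviewer, authors, coi_pairs):
    """True if the reviewer is an author or has a declared conflict with one."""
    return reviewer in authors or any((reviewer, a) in coi_pairs for a in authors)

coi_pairs = {("R. Reviewer", "N. Newcomer")}  # e.g. same institution
authors_at_submission = ["A. Author"]
authors_after_acceptance = ["A. Author", "N. Newcomer"]

# The check passes at assignment time...
print(has_conflict("R. Reviewer", authors_at_submission, coi_pairs))     # False
# ...but the expanded list reveals a conflict that was never caught.
print(has_conflict("R. Reviewer", authors_after_acceptance, coi_pairs))  # True
```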

In other areas, such as Signal Processing and Computer Architecture, changes to the author list post-acceptance are not allowed. Period. (Granted, some of these conferences also limit the number of submissions any single individual can be a part of, and fluid author lists defeat that limit by allowing a backdoor to hyper-prolificity.)

Even if we don't want to be quite so strict, it does make sense to discourage the practice of deliberately keeping the initial author list incomplete. Perhaps AI conferences should emphasize the obvious at submission time: author lists are expected to be complete at the time of submission.


Saturday, April 9, 2016

Hyper Paper Productivity (or the most prolific authors of IJCAI submissions..)

The other day we were compiling statistics on IJCAI submissions, and I was struck by some authors who seemed to be co-authors on a rather unbelievably large number of submissions.

At first I thought that the inflated numbers might be because the analysis is mistakenly merging multiple different people who happen to have the same name. 

So we redid the statistics, this time using the authors' contact email addresses to resolve identities. Surprisingly, the statistics on the hyper-prolific authors remained unchanged.
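The deduplication step amounts to counting submissions keyed by contact email rather than by author name, so that two different people who happen to share a name are not merged. A minimal sketch with entirely invented data:

```python
# Hypothetical sketch: per-author submission counts keyed by name vs. by
# contact email. The names, emails, and papers below are made up.
from collections import Counter

submissions = [
    {"id": 1, "authors": [("Wei Li", "wei.li@uni-a.example"),
                          ("J. Doe", "jd@uni-b.example")]},
    {"id": 2, "authors": [("Wei Li", "wli@uni-c.example")]},  # different person, same name
    {"id": 3, "authors": [("Wei Li", "wei.li@uni-a.example")]},
]

by_name  = Counter(name  for s in submissions for name, _  in s["authors"])
by_email = Counter(email for s in submissions for _, email in s["authors"])

print(by_name["Wei Li"])                 # 3 -- two distinct people merged
print(by_email["wei.li@uni-a.example"])  # 2 -- identities resolved by email
```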

Being of the "basically lazy" kind, I found this level of productivity beyond my meager ken. So I asked some of my less lazy colleagues to guess how many submissions the most prolific co-authors had.

Unfortunately, my colleagues seemed to be just as behind the times as I am. None of them could hazard a guess within even 50% of the true number. This even after I gave them a couple of chances to improve their guesses.

Clearly I need to associate with more productive colleagues! Be that as it may, let's come back to these hyper-productive authors.

The number of papers co-authored by the most prolific co-author in the IJCAI pool turns out to be... DRUMROLL please...


Yes, you heard me right. *thirty-two*. And no, this is not a single outlier. Next in line were authors with



co-authored submissions respectively. Without giving anything away, let me say that all these prolific authors are writing papers in applied ML--an area that also has the maximum number of submissions to IJCAI.

When I shared these numbers with my (admittedly lazy) colleagues, they countered that the number of papers accepted from these authors would surely be close to zero--after all, who other than a Map-Reduce program can churn out so many papers?

Well, little do these colleagues know! These papers are not the bottom of the barrel. About the only damning thing that can be said is that an overwhelming majority were basically borderline.

Which made me wonder--what if these prolific authors decided to focus their energies on fewer papers of potentially higher quality? (It reminded me of a colleague's backhanded compliment about one of his PhD students: "Having him is like having 10 mediocre students!")

Some colleagues pointed out that many IEEE conferences put an upper-bound on the number of submissions from any one author, and wondered if IJCAI should consider it too. Something to think about from a mechanism design point of view.
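Mechanically, a per-author cap of the kind these IEEE conferences use is easy to state: a submission is turned away at upload time if any listed author would exceed the limit. A minimal sketch--the cap of 3 and the email keys are purely illustrative, not a proposal:

```python
# Hypothetical sketch of a per-author submission cap. Note the backdoor:
# an author deliberately left off the list at submission time is never
# counted, which is exactly what fluid author lists enable.

MAX_SUBMISSIONS_PER_AUTHOR = 3  # illustrative cap

def can_submit(author_emails, counts, cap=MAX_SUBMISSIONS_PER_AUTHOR):
    """True if no listed author would exceed the cap with this submission."""
    return all(counts.get(e, 0) < cap for e in author_emails)

# Invented data: one author already at the cap, one well under it.
counts = {"a@x.example": 3, "b@y.example": 1}

print(can_submit(["b@y.example"], counts))                 # True
print(can_submit(["a@x.example", "b@y.example"], counts))  # False
```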

But clearly, there seem to be other pressures pushing people toward hyper-paper-productivity, even at the expense of spreading themselves too thin. Here is hoping that some resistance will be developed on this front.

Even in these days of inflated resume expectations, it is perhaps more important to be known for a contribution rather than a count; for a significant idea rather than for breaking a Guinness record on the number of papers, or for a Kim Kardashian-esque "famous for being famous" distinction.

As always, your comments are welcome!


PS: There are some obvious implications of hyper paper productivity for the "spinning the wheel of fortune" phenomenon I talked about earlier, but I will leave them to your imagination.

Disclaimer: Author anonymity was never compromised during this analysis. There is no implied correlation--positive or negative--between the number of submissions and the number of acceptances.

Addendum: A colleague pointed out that ICSE, one of the main conferences in Software Engineering, instituted a maximum of 3 papers per author starting in 2017. There has been a very interesting discussion about the policy in that community.

Tuesday, April 5, 2016

What's hot in ijcAI Redux

Here is the word cloud from the titles of the papers accepted to the IJCAI-16 technical program.

A big thanks to the IJCAI-16  Program Committee for a gargantuan job done well!


What's hot in ijcAI

A sincere "Thank you" to the Program Committee..

[The following was sent to the IJCAI program committee this morning]

Dear IJCAI-16 Program Committee members:

Last night, I sent out decision notifications to the authors of the
2,294 papers submitted to IJCAI this year.

For the past three months, I have had a bird's-eye view of the IJCAI
program committee in action. For the last ten of those days, I had the
additional privilege of poring over the reviews and discussions of a
broad swath of the IJCAI papers.

IJCAI reviewing is by no means perfect; no conference on a subject as
diverse as AI, with a 2,000-strong program committee, can possibly hope
to be. So, part of the experience, no doubt, was like watching the
proverbial sausage getting made.

But then there is the other part. The heady and gratifying experience
of seeing the many amazingly conscientious reviewers hard at work.  I
have seen papers with close to thirty comments, and reviewers having
highly intellectual discussions stretching over multiple days about
the subject matter. I have seen quite possibly the longest meta-review
ever written anywhere in science. I have seen SPC members and Area
Chairs asking me to wait just a little longer, so they can try to find
additional reviewers at the last minute to make sure a paper gets a
fair evaluation.

Above all, I have seen many many papers getting a quality of reviewing
that other fields can only dream about.

The whole experience made me remember afresh why I agreed to take on this
seemingly masochistic responsibility in the first place.

Thank you for all your help!  Let 
me end with this gem from one of the meta-reviews:

"The positive reviewers fought for the papers, and the negative
finally cheered up in their opinion. At the end, the paper is not a
homerun, but after all IJCAI-16 is a conference where people like to
discuss the results."

Warm Regards
Subbarao Kambhampati
Program Chair, IJCAI-2016