Musings on Crafting a CFP

April 11, 2019

This post is part of a larger retrospective on REdeploy 2018; the discussion of our CFP process, specifically, was split out into its own post. You can read the full retrospective here.

Mary talked a little bit about our CFP, but I think it deserves a closer look, because it’s one of the aspects of REdeploy I’m most proud of.

One of the lesser-known REdeploy facts is that Mary and I originally didn’t plan to have a CFP: we were going to invite speakers. But there was such interest in a CFP that we ended up putting one together. There were a few aspects of it that were, I think, unique:

  • Mary and I had invited half of the speakers before the CFP launched, so that constituted our input into the program. When we decided to do a CFP, we both felt it was important for other trusted colleagues in the Resilience Engineering space to have input as well, which is why we put together an awesome CFP Committee to review submissions.

  • Mary and I both feel diversity and inclusion are important; she said we were striving for a 50/50 gender split, noting, “While it took effort, this wound up being one of the simpler things that we accomplished...” I think she’s selling herself short here.

    One of the most interesting aspects of putting the program together is that we did naturally end up with a 50/50 split on the first review of the CFP results, but a ton of work went into setting up the CFP, and I think that effort directly facilitated that outcome.

  • The biggest chunk of work was anonymizing the submissions: we both did a pass through them, removing names and gender references. Mary did a much more thorough job, though, also removing company names, references to years of industry experience, and other details that could bias reviewers. I truly believe that removing these details, so that reviewers were focused on the proposed content of the talk, is what made that 50/50 split fall out of the process as naturally as it did.

    There’s an important lesson here, though, which is that the work to do this isn’t particularly difficult or complex... but it does require that you be deliberate about it and make the up-front effort. When you do, the results can be amazing, without any tweaking necessary. And while we were prepared to step in and make the program reflect values we think are important, especially in a resilience context, we ultimately didn’t have to... which I found heartening.

  • We used a fairly simple process for CFP reviews, involving PDFs and Google Forms, but it had some interesting elements that are worth highlighting, because so many conferences (even big-name ones!) get this just soooo wrong.

    First of all: we did not let CFP reviewers see comments from other reviewers. I have been on other CFP panels where visible comments from other industry leaders caused me to change my vote, which… is just a form of bias. And it embarrassed me, actually, but… that’s human nature. We engineered this CFP process so that specific type of bias was not possible.

    Secondly, we split all of the CFP submissions into three distinct buckets and asked each reviewer to review at least one bucket. They could review two or all three buckets if they so chose, but we did ask that they review any bucket they took in its entirety. (There’s a rough sketch of this kind of split after this list.) This was to combat the problem that most CFP review processes dump hundreds of submissions on reviewers and ask them to “review as many as they feel like they can.” That results in a lot of reviews for submissions that sort high (whether by submission time, talk title, or some other sorting key) and fewer for those at the bottom of the list. Submissions at the bottom also often receive lower scores simply because reviewers are tired or just trying to get through the list, another well-known form of bias.

    We feel this was a better experience for the reviewer panel (everyone ended up reviewing only a single bucket), and a fairer, more balanced approach for all of the submissions.
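
If you wanted to automate that bucket split rather than doing it by hand (our actual process was just PDFs and Google Forms), here’s a minimal sketch of one way it could work. The submission IDs, function name, and bucket count are all made up for illustration:

```python
import random

def split_into_buckets(submission_ids, n_buckets=3, seed=2018):
    """Shuffle the anonymized submission IDs and deal them into roughly
    equal buckets, so no bucket is skewed by submission time, talk title,
    or any other incidental ordering."""
    shuffled = list(submission_ids)
    random.Random(seed).shuffle(shuffled)
    return [shuffled[i::n_buckets] for i in range(n_buckets)]

# Hypothetical example: 90 anonymized submissions split three ways
buckets = split_into_buckets([f"talk-{i:03d}" for i in range(1, 91)])
for n, bucket in enumerate(buckets, start=1):
    print(f"Bucket {n}: {len(bucket)} submissions")
```

Each reviewer then signs up for a whole bucket, which keeps the review load predictable and means every submission gets roughly the same number of fresh eyes on it.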