The survey consisted of two major parts: the first about current code
sharing policies and the second about future policies.
Current Conference Code Sharing Policies
For the first survey part about current code sharing policies, 245 emails were
sent to organizers of 125 conferences. 34 responses (14%) were received from
chairs of 32 conferences (26%).

The first three responses in the graphic's list are pre-formulated.
The first one was never chosen.
Other answers offered, formatted for style, were
- Authors of accepted papers are invited to submit an artifact for their paper, which is checked to see whether it supports the claims in the paper. Authors of accepted artifacts are invited to make their artifact available in a repository.
- There is no explicit policy, but there is a culture in the field with the expectation that researchers will do so. I have seen papers rejected when the authors do not make their code available. (No one is checking that it is still available after review, however, and I am unaware of any instances where someone released their code for paper review and then took it down after the paper was accepted.)
- There is an option to have the code or artifact evaluated, but nothing about making it public post-conference. I am not sure we would necessarily want to enforce that since some people are writing proprietary code.

The first four responses in the graphic's list are pre-formulated.
The third option, "forbids", was never chosen.
Other answers offered, formatted for style, were
- Not applicable — the artifact evaluation process runs only after papers are accepted (potentially with shepherding, but shepherds are not integrated with the artifact evaluation process). There is no policy on how reviewers should treat promises to open-source code in papers during review. Double-blind reviewing makes it effectively impossible to link to already public code.
- There is no such policy, but authors who submit artifacts can have those evaluated. Not all papers have artifacts to evaluate though.

The first five responses in the graphic's list are pre-formulated.
- Badges (see for example https://www.acm.org/publications/policies/artifact-review-badging)
- Qualification for prizes or awards
- Extra pages in conference proceedings
- Extra time for submission
- None
Other answers offered, edited for style and clarity, were
- ACM badges after successful artifact evaluation
- Indirectly, the chances of acceptance are increased, since the contribution is considered to be more significant
- Due to the subject of the conference, the use of "research code", i.e. software used as part of a research project, is very rare. Yes, if a paper does depend on research code, as a referee I would insist that it be publicly available, and as PC chair I would [steer] towards rejection if it were not. So, the reward would be that the paper got accepted.
- Best paper prizes cannot be awarded unless code is available
Future Conference Code Sharing Policies
For the second survey part about future code sharing policies, 238 emails were
sent to organizers of 123 conferences. 21 responses (8.8%) were received from
chairs of 21 conferences (17%).

The first three responses in the graphic's list are pre-formulated.
Other answers offered, formatted for style, were

The first three responses in the graphic's list are pre-formulated.
Other answers offered, formatted for style, were

The first four responses in the graphic's list are pre-formulated.
Other answers offered, formatted for style, were

The first four responses in the graphic's list are pre-formulated.
The third option, "forbids", was never chosen.
Other answers offered, formatted for style, were

The first six responses in the graphic's list are pre-formulated.
- Funding agencies requiring code sharing for new grants.
- Universities taking code sharing into account for promotion and tenure.
- Conferences offering more tangible benefits to authors who share their code, like badges, prizes, pages, etc.
- Development tools improving to make code sharing easier.
- Conferences requiring that published papers indicate whether code is shared or not.
- There is no need for any measures.
Other answers offered, formatted for style, were
- See comment below.
- Something like ArXiv for code, where code gets its own DOI and gets indexed like publications, allowing for "published" and updated versions. Maybe a safe sandbox for reviewers to download and view things from untrusted sites, so peer reviewers can actually check on the state of code sharing.

The first six responses in the graphic's list are pre-formulated.
- Funding agencies will require code sharing for new grants.
- Universities will take code sharing into account for promotion and tenure.
- Conferences will offer more tangible benefits to authors, like badges, prizes, extra pages, etc.
- Development tools will improve to make sharing easier.
- Conferences will require that published papers indicate whether code is shared or not.
- None of the measures above.
Other answers offered, formatted for style, were
- See comment below.
- Some conferences will encourage papers to indicate whether code is shared or not.
The request "Please add any comments, suggestions, or objections here" prompted the following reactions,
which have been anonymized and edited for style:
I wish that major computer science conferences (like [conference]) would make code sharing (and verification)
mandatory. There are too many fake, misleading, bogus results that make it difficult for honest researchers
to publish new work, as they cannot beat the unverifiable but published "state of the art".
Most of the time that my students try to duplicate a paper's results, we get nothing close to what they report.
-
The above is happening / has already happened in many systems conferences. It's not 5 years from now; it's 5 years ago: policies, badges, awards, artifact evaluation as part of the process, etc. For example, take a look at https://www.sigplan.org/Resources/EmpiricalEvaluation/ -- SIGPLAN started it a few years ago, and from there it has spread to other SIGs.
-
"Universities will take code sharing into account for promotion and tenure" -- Is this a serious question? If so, why stop there? How about adding an option stating that "researchers will be arrested and shot if they don't share"?
-
"Developing tools will improve to make sharing easier" [*specifically for academic publications*] -- Again, I'm confused by the question. Are there not use-cases orders of magnitude more compelling to motivate improved sharing? Is your thesis really that humanity will dedicate resource for improved sharing *especially* for academics?
-
Copy-pasting text between survey questions, rather than simply pointing out the differences (e.g., "please answer the previous question, but now assuming you are a dictator") doesn't seem very respectful of the time of the researchers who fill out this form.
-
Industry frequently can't share code/data. There's nothing you can do about that except decide you don't want to accept their papers, which would be foolish.
-
Overall, with appreciation to your effort, the wording of this survey seems to suggest that you didn't do your homework and aren't really aware of what's going on. The survey doesn't convey seriousness. Rather, it seems amateurish, and parts of it seem silly.
[Please note that FindResearch.org is aware of badges being granted as early as 2011 and similar verification statements being made as far back as 2008.]
In my area most people are happy to share code. If they don't, it is because it is so badly written it would not be very useful. A higher chance of the paper being accepted is enough of an incentive for authors. But reviewers can decide what is best on a case-by-case basis, without strict policies.
I think the badge program is excellent. It creates the right incentives for people to make code available and an environment where people will actually try to recreate your results.
There are legitimate reasons for code not to be public, so I would avoid requiring code release. I think code availability "counts" for review purposes, but it should be the default, so I think it only makes a difference if the code is an open source version of a tool that is important but has been only available to research labs, or is a new benchmark suite.
Since my conference deals largely with theoretical work, it is exceedingly rare that any submitted paper actually makes use of research code of any kind. For this reason, special policies may be excessive. But if a paper does crucially depend on research code and does not make it publicly available, as referee and/or PC chair I'd find that a clear reason for rejection. Not so much because there would be a policy formulated or a form to be filled in, but rather because it is unsound scientific practice to claim results that cannot be verified by the reader. Rather than more checkboxes on bureaucratic forms, I'd see this as a matter of scientific integrity that is part of a scientist's and community's reputation, and thrives through people citing and building upon good science.
Keep going, this is good work!
[Conference] already has an artifact evaluation process that rewards code sharing and result reproducibility with badges.