The first KEF Forum took place on 7 June 2018 at the representation of the federal state of Saxony-Anhalt in Berlin. It was aimed primarily at the contact persons responsible for handling security-relevant research and at members of the already established KEFs.
The workshop centred on sharing experiences: the obstacles to establishing a KEF, questions arising in the consultation on and assessment of security-relevant research projects, harmonising procedures for dealing with security-relevant research, and raising awareness of the potential misuse of research findings and methods.
In the first session, members of various committees and bodies that deal with the ethical issues of security-relevant research presented their experiences and the results of their work. Siegfried Bien (Philipps-Universität Marburg) presented the work of the ‘Research and Responsibility Committee’ at Philipps-Universität Marburg and explained why the university decided to set up the committee in the first place. Jens Teifke (Friedrich-Löffler-Institut, FLI) talked about how the FLI’s Biorisk Committee works, the cases it has dealt with so far, and the assessment criteria used. Cornelia Reimoser (Fraunhofer-Gesellschaft) detailed how the Fraunhofer-Gesellschaft addresses security-relevant research: in addition to offering ethical consultation by email and telephone, it has tested an ethics screening system for preliminary research and plans to set up an ad hoc committee in autumn 2018. Petra Gehring (TU Darmstadt) reported on the establishment at TU Darmstadt of a procedure to implement the civil clause and on the positive response this procedure has received.
In the second session of the workshop, participants discussed in groups how KEFs could address and evaluate three concrete examples of security-relevant research projects that had been brought before the committees. The case studies were the production of synthetic infectious horsepox viruses, artificial intelligence methods for identifying and rectifying software vulnerabilities, and deep learning algorithms for predicting sexual orientation from portrait photos. The group work showed that KEFs can choose different methods to evaluate the security relevance of research projects and that the evaluation should always be a discursive process.