29C3 - Version 1.9

Fahrplan 29C3

Speakers
Sebastian Schrittwieser
Schedule
Day: Day 3 - 2012-12-29
Room: Saal 6
Start: 13:15
Duration: 00:30
Info
ID: 5077
Event type: Lecture
Event language: English

Ethics in Security Research

Recently, several research papers in the area of computer security were published that may or may not be considered unethical. Looking at these borderline cases is relevant because today's research papers influence how young researchers will conduct their own research. In our talk we address various cases and papers and highlight emerging issues for ethics committees, institutional review boards (IRBs) and senior researchers who have to evaluate research proposals and ultimately decide where they see a line that should not be crossed.

For researchers in computer security, the recent success of papers such as [KKL+09] is an incentive to pursue lines of research in which ethical questions become an issue. In our talk at the conference we will address various cases and papers and provide possible guidelines for ethics committees, institutional review boards (IRBs) and senior researchers to evaluate research proposals and to finally decide where they see a line that should not be crossed. While some actions might not be illegal, they may still be unethical.

Key topics that will be addressed in the discussion: (1) do not harm users actively, (2) watching bad things happen, (3) control groups, (4) undercover work. In the following, we introduce some of the lines of thought that will be discussed during the talk:

A first and seemingly straightforward principle is that researchers should not actively harm others. Deploying malware, or writing and deploying new viruses, is obviously a bad idea. Is it, however, acceptable to modify existing malware? Following the arguments of [KKL+08], one would not create additional harm if, for instance, one instrumented a virus so that it sends back statistical data about its host. Such a modification could be made by the ISP or by the network administrators of a university network. If, however, this modification makes the virus less likely to be detected by anti-virus software, the case changes: it then becomes analogous to distributing a new virus. A few quick lab experiments have shown that malware that is detected by virus scanners is very often no longer picked up once it has been modified (see the sketch below).
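
The following minimal, hypothetical sketch illustrates why that lab observation is plausible for purely signature-based matching; the toy "scanner" below is not any real anti-virus engine, and real products use far more robust signatures and heuristics. It only shows that an exact-match check (here: a SHA-256 blocklist) misses a sample as soon as a single byte changes.

# Toy illustration (hypothetical): a hash-based "scanner" keeps a blocklist of
# SHA-256 digests of known samples. Flipping one byte of a sample changes its
# digest, so the naive exact-match check no longer flags it.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A harmless stand-in for a "known bad" sample; no real malware is involved.
known_sample = b"this byte string stands in for a known malicious binary"
blocklist = {sha256(known_sample)}

def is_flagged(data: bytes) -> bool:
    """Return True if the sample's digest is on the blocklist."""
    return sha256(data) in blocklist

# The unmodified sample is detected ...
assert is_flagged(known_sample)

# ... but changing a single byte yields a different digest, so the
# hash-based check misses the modified sample entirely.
modified = bytearray(known_sample)
modified[0] ^= 0xFF
assert not is_flagged(bytes(modified))

print("unmodified flagged:", is_flagged(known_sample))
print("modified flagged:  ", is_flagged(bytes(modified)))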

Stealing a user’s computing and networking resources may harm her; however, if some other malware already steals those resources, one could argue that the damage is smaller since the researcher’s software does “less bad things”. This is basically what the authors of [KKL+08] argue. So when taking over a botnet, generating additional traffic would not be permissible, whereas changing existing traffic would be. The real-world analogue: you see someone breaking into a house, you scare the person away, and then you go in yourself and only look around, for instance to understand how the burglar selected the target and what he was planning to steal; this is “less bad” than the theft the burglar was presumably planning.

There is a line of research in which researchers only passively observe malware and phishing without modifying any content or receivers. When thinking about the research ethics of “watching what happens”, the Tuskegee Study of Untreated Syphilis [W1] comes to mind. Patients were not informed about available treatments, no precautions were taken to prevent patients from infecting others, and patients were actively given false information regarding treatment. Today it is obvious that the study was unethical. As done in [BSBK09], the best approach is to ask people for their consent prior to the experiment. In other studies, involving, for instance, botnets, this may be impossible because a host computer can only be contacted after sending modified messages. In a botnet study such as [SGCC+09], it seems both feasible and responsible to inform a user that her computer is part of a botnet. However obvious this may seem, there might be multiple users on an infected machine, and informing an arbitrary user could cause additional harm. For instance, the infection of an office computer may have been caused by deactivating the anti-virus software, surfing to Web pages not related to work, etc. Thus, informing one person could cause another person to lose his job. While this is not as extreme as the “Craigslist experiment” [W2], similar impacts are conceivable.

For a cell phone provider we set up two honeynets, one open to the Internet and one accessible only to the mobile provider’s customers who use GPRS/UMTS data services. The goal was to analyze the trustworthiness of different devices. Management decided not to inform users whom we knew to be infected by certain malware; the rationale was that customers might feel they were being watched and that their privacy had been invaded. Comparing this with a real-world analogue of “watching without helping” (such as the circumstances of the murder of Kitty Genovese and the bystander effect [MLC07]), one may consider this to be unethical.

  • [BSBK09] Leyla Bilge, Thorsten Strufe, Davide Balzarotti, and Engin Kirda. All your contacts are belong to us: Automated identity theft attacks on social networks. In WWW ’09: Proceedings of the 18th International Conference on World Wide Web, pages 551–560, New York, NY, USA, 2009. ACM.
  • [GC10] Simson L. Garfinkel and Lorrie Faith Cranor. Institutional review boards and your research. Communications of the ACM, 53(6):38–40, 2010.
  • [GSFT08] Steven J. Greenwald, Brian D. Snow, Richard Ford, and Richard Thieme. Towards an ethical code for information security? In NSPW ’08: Proceedings of the 2008 Workshop on New Security Paradigms, pages 75–87, New York, NY, USA, 2008. ACM.
  • [HME10] Markus Huber, Martin Mulazzani, and Edgar R. Weippl. Social networking sites security: Quo vadis. In Proceedings of the International Conference on e-Business (ICEB), Minneapolis, MN, August 2010.
  • [HMSE10] Markus Huber, Martin Mulazzani, Sebastian Schrittwieser, and Edgar R. Weippl. Cheap and automated socio-technical attacks based on social networking sites. In Proceedings of the ACM CCS Workshops, Chicago, IL, October 2010.
  • [HMW10] Markus Huber, Martin Mulazzani, and Edgar R. Weippl. Who on earth is Mr. Cypher? Automated friend injection attacks on social networking sites. In Proceedings of the IFIP International Information Security Conference 2010: Security & Privacy – Silver Linings in the Cloud, Brisbane, Australia, 2010. Springer LNCS.
  • [JJJM07] Tom N. Jagatic, Nathaniel A. Johnson, Markus Jakobsson, and Filippo Menczer. Social phishing. Communications of the ACM, 50(10):94–100, 2007.
  • [KKL+08] Chris Kanich, Christian Kreibich, Kirill Levchenko, Brandon Enright, Geoffrey M. Voelker, Vern Paxson, and Stefan Savage. Spamalytics: An empirical analysis of spam marketing conversion. In CCS ’08: Proceedings of the 15th ACM Conference on Computer and Communications Security, pages 3–14, New York, NY, USA, 2008. ACM.
  • [KKL+09] Chris Kanich, Christian Kreibich, Kirill Levchenko, Brandon Enright, Geoffrey M. Voelker, Vern Paxson, and Stefan Savage. Spamalytics: An empirical analysis of spam marketing conversion. Communications of the ACM, 52(9):99–107, 2009.
  • [MLC07] R. Manning, M. Levine, and A. Collins. The Kitty Genovese murder and the social psychology of helping: The parable of the 38 witnesses. American Psychologist, 62:555–562, 2007.
  • [SGCC+09] Brett Stone-Gross, Marco Cova, Lorenzo Cavallaro, Bob Gilbert, Martin Szydlowski, Richard Kemmerer, Chris Kruegel, and Giovanni Vigna. Your botnet is my botnet: Analysis of a botnet takeover. Technical report, Security Group, Department of Computer Science, University of California, Santa Barbara, 2009. http://www.cs.ucsb.edu/~seclab/projects/torpig/torpig.pdf
  • [Spi03] Diomidis Spinellis. Reflections on trusting trust revisited. Communications of the ACM, 46(6):112, 2003.
  • [Tho84] Ken Thompson. Reflections on trusting trust. Communications of the ACM, 27(8):761–763, 1984.
  • [W1] http://en.wikipedia.org/wiki/Tuskegee_Study_of_Untreated_Syphilis_in_the_Negro_Male
  • [W2] http://en.wikipedia.org/wiki/Jason_Fortuny#.22Craigslist_Experiment.22