There was a great article in the newspaper this week about a Melbourne hospital that has found an interesting new use for iPads.
The hospital is using FaceTime to connect new mothers with their premature newborn babies, who are being cared for in the Neonatal Intensive Care Unit. The mothers are themselves unwell and confined to a hospital bed. They would normally have to rely on information relayed by partners and family members to find out how the baby is doing, what the NICU environment is like, and what medical care the baby is receiving. Using iPads, mothers can be virtually present at the baby's bedside. They can talk to their baby and to the medical staff caring for the newborn.
This is a simple solution - a new use for an existing technology - that is a great example of technology being used to address a real need.
I have been involved in many conversations lately about ethics and new technologies. The words “ethics” and “new technologies” bring to mind issues like personal data and surveillance, evoking the fear that big brother is watching everything we do online. But it’s not just government and big business that might record our online activities. Social media provides researchers with fantastic new opportunities to study people: all those posts, tweets, YouTube videos, etc, provide vast quantities of data that is free and often publicly available for researchers to use.
I read an interesting paper on this topic recently by Michael Henderson and colleagues, “Silences of ethical practice: dilemmas for researchers using social media”. The paper reviews published studies that have examined young people’s social media practices, with a view to identifying the ethical issues that have been discussed and addressed by researchers working with this material. The review found that ethical issues were conspicuous in their absence. Data obtained from social media is often publicly available, so there is typically an assumption that consent is not required (or cannot be obtained in any case). Researchers, then, rarely include any consideration of ethical issues in their published reports of these studies.
Henderson and colleagues convincingly argue that more consideration needs to be given to the ethics of conducting research in this way. “Publicly available” does not necessarily imply consent to use the data in research. In social media, there are blurred boundaries between “public” and “private”. Just because somebody posts something online does not mean they want or expect it to be used by people other than the intended audience. A person may share a photograph, video, or status update, but only expect it to be viewed by their circle of friends - not to be broadcast more widely.
Then there is the issue of “traceability”. If a researcher quotes directly from the data they collect online, the anonymity of the source will be compromised. Running the quote through a search engine could possibly identify the person who originally posted that information. If anonymity is to be preserved, then, quotes have to be changed or not used at all.
This issue extends beyond using data collected online. Protecting anonymity is becoming more difficult as search engines become more sophisticated. The people and organisations who contribute to research are now more likely to have an online presence that could be linked to the information they share with researchers. I attended an interesting presentation last year on the topic of “un-googling” publications, which addressed the issue of preserving anonymity in published research. More recently, Pat Thompson has written a blog post on this topic, in which she notes that questions of anonymity, consent and confidentiality “will become more and more tricky the more we amass digital footprints”.
In some research contexts it is not possible or desirable to preserve anonymity. In one project that I have been involved in recently, many participants requested to use their real names - they want to be associated with their contribution to the research. This brings to mind issues of ownership over research data. Should participants have intellectual ownership over the words they produce or the content they create for a research project? This can be particularly contentious when the research data consists of content people have created and shared online. They may have shared this for a particular purpose and a particular audience - not for the purpose of contributing to a research project. It seems that conversations about ethics and research using new technologies need to go beyond identifying strategies to preserve anonymity and privacy, and consider some of the complexities of identity, ownership, and consent over the use of online data.
Dr Jenny Waycott, Associate Professor, School of Computing & Information Systems, The University of Melbourne
Contact: jwaycott @ unimelb.edu.au