Yesterday, Apple held one of its Keynote events. Amongst talk of super-thin minimalist laptops and $10,000 luxury smart watches, we were also very excited by the announcement of ResearchKit. This interests us both as Science Practice and as part of SP+EE, where we work on healthcare-related projects, including the design and development of software for medical research.

We spent some time today delving a bit deeper into ResearchKit, so here are some initial notes along with a few open questions that we hope will be answered soon:

  • ResearchKit is a software framework that allows researchers to run medical studies using participants’ own iPhones as the primary means of gathering data. The idea is that ResearchKit should make it easy for medical researchers to collect more data from larger and more varied study groups, more frequently. This point was quite nicely illustrated in a tweet by John Wilbanks.


  • Importantly, the code will be open source, meaning that, when it is released, anyone will be able to download it and start building on top of the platform. Hopefully this will encourage a community of developers and researchers to form and share useful modules. These need not be limited to software alone: they could also include reusable design patterns, ethically approved content and best practices.

  • ResearchKit has a technical overview document, which can be found here. It describes the main modules ResearchKit will include, designed to reflect the most common elements of a clinical study: a survey module, an informed consent module and an ‘active tasks’ module. (We sketch what a survey task built from these modules might look like at the end of this list.)

  • Interestingly, the technical overview documentation also lists things that ResearchKit doesn’t include, such as the ability to schedule tasks for participants. We’ve found scheduling tasks and reminders to be a more or less essential feature in similar studies we’ve been involved in, so this is probably on Apple’s to-do list.

  • Apple also states that ResearchKit doesn’t include automatic compliance with research regulations and HIPAA guidelines, which places the responsibility firmly on researchers. However, as the developer and research community around ResearchKit establishes itself, perhaps best practices will be developed and shared to help streamline compliance in a range of international regulatory contexts.

  • In an interview in Nature, Stephen Friend, president of Sage Bionetworks, states that “At any time, participants can also choose to stop. The data that they have contributed stays in, because you don’t know who they are”. This standpoint doesn’t quite chime with us. Perhaps other consent models will be possible, where a participant can optionally remove their data from a study if they decide to leave. It’s their data, after all.

  • In the same article, Ray Dorsey from the University of Rochester in New York comments on the most obvious problem with ResearchKit: sampling bias. He points out that “the study is only open to individuals who have an iPhone, specifically the more current editions of the iPhone”. We can imagine a port of ResearchKit to Android popping up fairly quickly once the source code is released, but bias will always be an important consideration in studies that require participants to have smartphones, regardless of the manufacturer.

  • It will also be interesting to see how peer reviewers will judge the quality of the data collected by ResearchKit-enabled apps when the results begin to be published.
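
To make the modules above a little more concrete, here is a rough sketch of how a simple survey task might be assembled once the framework is available. Since the source code hasn’t been released at the time of writing, the class names and initialisers below (ORKQuestionStep, ORKOrderedTask, ORKTaskViewController and so on) are assumptions on our part rather than a confirmed API - treat this as an illustration of the general shape of the survey module, not a definitive implementation:

```swift
import UIKit
import ResearchKit  // assumed module name - the framework is not yet public

class SurveyViewController: UIViewController, ORKTaskViewControllerDelegate {

    // Assemble and present a single-question survey. Every identifier
    // and signature here is our best guess at what the released API
    // might look like.
    func presentSurvey() {
        // A 1-10 scale answer format for the question step.
        let answerFormat = ORKScaleAnswerFormat(maximumValue: 10,
                                                minimumValue: 1,
                                                defaultValue: 5,
                                                step: 1)
        let question = ORKQuestionStep(identifier: "sleepQuality",
                                       title: "Daily check-in",
                                       question: "How well did you sleep last night?",
                                       answer: answerFormat)

        // Steps are grouped into an ordered task, which a stock view
        // controller then walks the participant through.
        let task = ORKOrderedTask(identifier: "dailySurvey", steps: [question])
        let taskViewController = ORKTaskViewController(task: task, taskRun: nil)
        taskViewController.delegate = self
        present(taskViewController, animated: true, completion: nil)
    }

    // Collect the results (or handle a cancellation) when the task ends.
    func taskViewController(_ taskViewController: ORKTaskViewController,
                            didFinishWith reason: ORKTaskViewControllerFinishReason,
                            error: Error?) {
        defer { taskViewController.dismiss(animated: true, completion: nil) }
        guard reason == .completed else { return }
        // taskViewController.result would hold the participant's answers,
        // ready to be serialised and sent to the study's servers.
        print(taskViewController.result)
    }
}
```

If the informed consent and active task modules follow the same step-based pattern, swapping the question step for a consent or sensor-driven step should be all that’s needed to reuse this structure.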

It looks like a fascinating platform and we will be watching eagerly as it develops, and maybe even jumping on board too.

Introducing Matteo

After our call for a Challenge Prizes intern a week ago, we have had a number of fantastic applicants. Thank you to everyone who got in touch!

One of the more unexpected enquiries we received came from Matteo Farinella. Matteo has a PhD in Neuroscience from UCL and has since been working as an illustrator, creating info-comics and scientific illustrations. We have been following Matteo from afar over the last few years, admiring work like Neurocomic and noting his recent success in the Vizzies, so it was great to hear from him.

Matteo will be joining us for a few months to support us on an exciting new Challenge Prizes project. One slight problem: Matteo is grossly overqualified to be an intern. So, while Matteo will be helping with the research we originally needed an intern for, he’ll be doing a whole lot more too.


Update 05/03/2015: Call now closed. Thank you to those who applied!

We’re looking for a research intern with a passion for technology and innovation to join our team.

  • Salary: Paid Internship
  • Location: 83-85 Paul Street, London EC2A 4NQ, UK
  • Term: 3 months
  • Hours: Full-time
  • Starting date: Immediately

In case you haven’t met us yet: we’re Science Practice, a design and research company based in London. We work across a variety of areas, but central to our mission is creating an explicit role for design in scientific practice. Whether we’re designing challenge prizes, prototyping microfluidic chips or creating new methods for visualising genetic data, we’re always looking at ways to integrate design principles with the processes and methodologies of science.

We are a multidisciplinary team of five with diverse backgrounds and skills, ranging from design and business to biomedical engineering, but with a unifying passion and curiosity for scientific innovation.

We are looking to recruit a recent university graduate to support a project to create several challenge prizes. This is a really exciting area that involves futures research into new technologies and disruptive innovations that could solve some of today’s most pressing problems. The challenge prizes will focus on different areas, such as robotics, biometric security, quantum computing and digital currencies. We are looking for someone knowledgeable, inquisitive and passionate about new and emerging technologies and the potential they hold.

The role will involve:

  • Researching technological innovations and their potential to address social problems;
  • Writing up key research findings and helping prepare external project documentation;
  • Supporting expert engagement work - this includes identifying key experts in a specific field, arranging and taking part in interviews, and synthesising main ideas;
  • Working closely with a team of international partners in designing the structure of several challenge prizes;
  • Completing important and often time sensitive ad hoc project tasks.

The person we’re looking for:

  • Passionate about technology and its potential for social impact;
  • Interested in innovation and tools for supporting innovation such as challenge prizes or competitions;
  • Strong academic credentials - a recent Bachelors or Masters graduate (preferably in a technology-related degree), with a 2.1 or higher;
  • Strong research skills - the ability to understand and synthesise complex information quickly;
  • Strong communication and writing skills;
  • Organised, with a close attention to detail;
  • Efficient and flexible, able to manage priorities to meet deadlines;
  • Self-motivated and inquisitive - an ability to work autonomously but seek advice when needed;
  • Able to work on location in London, UK;
  • And if you really want to tick all the boxes: design/programming skills.

If you’re interested in applying for this role, please send an email with your CV to Ana at af@science-practice.com. Thanks and looking forward to hearing from you soon!

In February 2014 we started work on the Longitude Prize. How we approached this project and what we learned along the way is the topic of the following posts:

After almost four months of interviews with over 100 experts and multiple design iterations we had accumulated a lot of valuable knowledge; the next step was to synthesise and validate this knowledge with a wider expert audience. We began writing up the six challenge prize design proposals into ‘challenge reports’.

Writing these initial reports wasn’t an easy task. We wanted them to offer readers a guided explanation of the decisions made during the research and design process. We wanted to show where there was a clear consensus, but also highlight areas that were still in need of further discussion.

To do this we structured the reports to reflect our research process. Reports began by describing the broad problem area, then gradually focused in on the challenge area, the role the challenge prize would play in addressing the core problem, and the types of solutions encouraged. This gradual ‘zooming in’ allowed us to present our arguments and the decision-making process behind the proposed design. It also meant that a contentious element or argument could be traced back to an initial decision point that could then be discussed.

Diagram showing the decision points for the Antibiotics prize with our recommended option at each stage

Challenge Reports in Practice

Once written, we validated these reports with experts. This time around, we wanted experts to act as reviewers. We wanted them to understand that the document we were presenting to them was close to a final challenge prize design, but we still wanted them to actively contribute to its structure. For this purpose, we scattered questions throughout the reports and added an Appendix with key issues other experts brought up in previous conversations.

There is nothing particularly original about this process, but it was very valuable for us: it drew attention to misunderstandings and oversights on our part.

After several iterations each challenge area had an accompanying report that narrated the journey of designing its challenge prize and the decisions made along the way. All we needed to know now was which one of these challenges was going to become the Longitude Prize 2014.

All six Longitude challenge reports

And the Winner is…

Dr Alice Roberts announcing the result

On 25 June 2014, Antibiotics was announced as the winner of the British public’s vote to become the topic of the Longitude Prize. Following this announcement, a decision was made to share the Antibiotics challenge report with the general public to get broader input on the proposed prize structure.

To make the report more accessible we included some additional features. We added illustrations of some example diagnostic tools to make the types of solutions sought more concrete. We included additional visualisations to support the prize parameters and, most importantly, we added a diagram of the prize assessment process to help competitors understand what is expected of them at the different prize phases.

Diagram explaining the Longitude Prize assessment process

Following these changes, the Antibiotics challenge report was published on the Nesta website with the aim of engaging the general public in discussions around the Longitude Prize and getting their feedback on the structure of the Antibiotics challenge. Based on this report and the feedback from the open review, the final Longitude Prize 2014 Prize Rules were created.

Wrapping up the Longitude Project

The experience of researching and designing the six candidate challenges for the Longitude Prize was a very valuable one for us. It gave us the opportunity to explore new ways of engaging with experts and learn how to make best use of their expertise.

In our write-up of this project we have placed an emphasis on the design proposals we used to prompt discussions. We did this to highlight their role in supporting more focused, specific and constructive discussions. At a more fundamental level, this process meant that the researcher took responsibility for the creative quality of the ideas being discussed - something that is often lacking in research that seeks to canvass the opinions of many stakeholders.

Now that our work on the project is done, we’re eagerly looking forward to seeing the innovations the Longitude Prize 2014 will bring!

You can read the full Antibiotics challenge report here and the Prize Rules here.

Think you’ve got an idea about how to solve the Longitude Prize 2014? Register your team here. Good luck!

In February 2014 we started work on the Longitude Prize. How we approached this project and what we learned along the way is the topic of the following posts:

In our previous Longitude Prize post we introduced challenge mapping, our process for gaining enough understanding of the six challenge areas to start designing the challenges themselves. This post is about prototyping and testing different formulations for each challenge.

Similar to the previous research phase, we could only do this by talking to experts. This time around, we wanted them to think like competitors. We needed to understand whether the challenges we were proposing were right. Would they inspire healthy competition and novel solutions from a wide field of participants? Were they good problems?

Before we started the next round of interviews we wanted to create a proposal to stimulate discussion. So we came up with something called a challenge prototype.

The structure of a Challenge Prototype

Despite the fancy name, challenge prototypes are pretty simple one-page documents that summarise a challenge. They state the vision of the challenge, define the problem to be solved, set the goal to be attained, detail the judging criteria and, lastly, spell out the logistics of taking part: deadlines and prize money.

All six Longitude challenge prototypes

By talking through these prototypes with experts we wanted to validate the conclusions we drew following the challenge mapping phase as well as get a better understanding of how competitors would approach such a prize. We wanted to move from the hypothetical to the concrete and get into the details.

Understanding competitors

Each interview started with the problem and goal statements in the prototype. When an expert disagreed with the proposed challenge, discussing these two statements helped us understand whether this was due to our framing of the problem or to the solutions we were expecting.

The most detailed conversations generally took place around the judging parameters and timelines. We wanted to know whether the judging criteria made sense and whether they were objective enough for solutions to be assessed against them. Equally important, we wanted to know if the challenge had a reasonable chance of being solved in the given timeframe and understand what kind of support could motivate and encourage innovators along the prize journey.

Stating the judging criteria with specific targets and limits - even if these weren’t necessarily the right ones - helped engage experts in detailed conversations around what form potential solutions might take and how they could be assessed. This allowed us to get a feel for the dynamics between the individual criteria and how they fit together.

One of the unexpected benefits of having the challenge written down was the corrections we received to our (often clumsy) use of specialist terminology. These are the types of mistakes that don’t get picked up in conversation, but stood out to experts on paper.

The last question we asked the experts we interviewed was whether they would take part in the challenge as described. If the answer was ‘yes’, then this was a positive sign that we were getting close. If the answer was ‘no’, then this was a good opportunity to ask why.