On 10th May, I went up to London for the seventh Overleaf Future Pub event, hosted by Digital Science (and then on to the Scholarly Social meet-up in a pub down the road afterwards to continue the conversations).
There were seven speakers, each presenting their modern publishing initiative in a lightning-fast five minutes of information overload.
The whole event was very interesting and I took loads of notes. Although it’s taken me a while to get them down, I have finally written up this post about them all, including my own opinions on the proposals and ideas the speakers were discussing.
The seven speakers represented:
My full observations and thoughts follow.
Reimagining scientific news: How user research led to an entire product redesign, by Sybil Wong and Mimi Keshani
@SybilCKW and @MimiKeshani
Sparrho is a scientific content aggregation site that allows users to make collections of article abstracts from across a huge range of publishers. Users can like abstracts, pin them into collections of papers, comment on them and share them with friends – a personal curation site similar to Pinterest. Mimi talked us through some of the major changes they have been making to the site in response to user feedback, including new and improved keyword filtering. It seems like a really neat site, and one that could facilitate some good conversations around papers, should enough people get involved with it.
Peer-to-peer recognition from author to reviewer, by Laura Harvey
Laura spoke about a potential collaborative project she is organising, aimed at facilitating author feedback and personal thanks to the reviewers who commented on their work. This is based on the notion that there is no system in place for authors to provide any gratitude to reviewers for their work. She put out a request for publishers, peer review platform developers, editorial staff and editors to get involved in the project.
Now, whilst I think the motives behind this project are admirable – I advocate for personable communication during the peer review process myself – there seem to be several limitations to this idea which I think prevent it from working.
Firstly, reviewers tend not to provide their reviews for the benefit of authors per se, but rather for the benefit of the journal, to ensure that it accepts work of a particular standard and scope. The recommendations reviewers provide are there to ensure the manuscript is scientifically sound. Although this will help the author, sometimes significantly, some reviews famously go rather further than this, and not all comments are particularly appreciated!
Secondly, as there is no platform to facilitate this project, much manual intervention would be required, and the key to its success would be widespread adoption. At present, the number of manual tasks involved in the workflow would render it unscalable. As many (most?) journals operate some form of blinded review, direct correspondence would not be readily possible; editorial offices would be required to act as intermediaries and possibly vet all correspondence.
Finally, there are already ways in which authors show their appreciation. In my experience, authors email editorial offices to pass on thanks, and sometimes add an acknowledgement in the published manuscript – though this addition is sometimes vetoed by journal Editors who deem it inappropriate to publish.
A noble idea then, but one which may be very difficult to put into practice.
Peer to Peer Science, by James Littlejohn
James gave an especially passionate but incredibly technical (for my suddenly tech-illiterate mind) talk about the potential application of blockchain technology to science. Dsensor uses peer-to-peer data collection to create a computationally derived consensus ‘truth’ through a mapping protocol: systematically cross-referencing patterns across vast data sets to fine-tune consistencies within them, until no alternative patterns can be found, thus producing a truth from data… I think.
I can only be honest and admit to being defeated by the advanced technological free-thinking after this point. I will leave you with the video from the Dsensor site, which gives a two-minute summary of the ideas.
Automating peer review for research by Daniel Shanahan
Several years ago, I attended an ALPSP conference where Cameron Neylon – Advocacy Director for PLoS at the time – spoke about the possibilities of machine-based peer review that could check manuscripts for standardised requirements, such as statistical correctness, taking some of the error-prone drudgery out of peer review and allowing the reviewer more space to provide human input into the process without distraction. I found this idea very appealing, and have awaited the day when it would become reality and referees would no longer be tasked with heavy-duty stats crunching.

Daniel Shanahan presented a fruition of this idea: a new automated process for BioMed Central medical journals that uses text-mining and synonym matching, via StatReviewer (itself a beta development), to create machine-produced ‘peer review’ reports flagging potential errors in the reporting and analysis of clinical trials. These machine reviews are produced in addition to the normal peer reviews, and are clearly identified as coming from the computer check. The trial is limited to medical journals, as they have the most structured, standardised guidelines and are therefore potentially the easiest targets for the kind of text-matching the project requires. The trial actually began on the day of the Future Pub meeting, and I await the results with considerable interest.
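To make the idea concrete, here is a toy sketch of the kind of text-matching check such a system might run over a manuscript. This is purely illustrative – it is not how StatReviewer actually works, and the patterns and messages are my own invention:

```python
import re

def flag_stat_reporting(text):
    """Flag common statistical reporting issues in a passage of manuscript text.
    Illustrative sketch only; real tools use far richer rules and synonym lists."""
    flags = []
    # "p = 0.000" is impossible: p-values are never exactly zero
    if re.search(r"p\s*[=<>]\s*0\.000\b", text, re.IGNORECASE):
        flags.append("p-value reported as 0.000; should be reported as p < 0.001")
    # a p-value reported with no accompanying test statistic (t, F, z, r, chi...)
    if re.search(r"\bp\s*[=<]\s*0?\.\d+", text, re.IGNORECASE) and not re.search(
        r"\b(t|F|z|r|chi)\s*(\(|=)", text
    ):
        flags.append("p-value given without a test statistic")
    # a claim of significance with no p-value reported at all
    if "significant" in text.lower() and not re.search(r"\bp\s*[=<>]", text, re.IGNORECASE):
        flags.append("claim of significance without a reported p-value")
    return flags
```

A sentence like “The difference was significant (p = 0.000).” would trip two of these rules, while a properly reported result such as “t(48) = 2.10, p = .04” would pass cleanly.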
Connecting experts, by Joris van Rossum
Joris presented his ‘marketplace’ website Peerwith: a peer-to-peer author services platform in the mould of UpWork, providing the opportunity for direct collaboration between authors and service experts. Peerwith is integrated with the academic social network Mendeley, as well as Twitter, with plans to extend to publishers as a service option for their authors. A very intriguing platform.

I checked out Peerwith almost immediately after the meeting, but found it somewhat limited in the ways I could format information about my services – though I am perhaps not quite the right person for their target audience at this point. The browsing features are somewhat awkward: I could not easily search for service providers before submitting an official job request – which means, unfortunately, there is an erroneous test request floating in the system, looking for “aaaa” with a budget of £100, that I had to submit before being able to see the people I might want to contact. However, I did get a very speedy reply asking whether this was a test or whether I needed more assistance, which is very positive. I would also like to see ratings and recommendations of service providers – though perhaps these appear once some have been provided by clients. The site is still in beta, and I believe there is potential for success here, in the same way as UpWork and AirBnB have succeeded.
EDIT – I have been speaking with Ivo Verbeek, one of the co-founders, since writing the initial draft of this, and he has given me loads of very helpful information – so I can certainly vouch for their customer service and willingness to support their users.
Specifically, Ivo has told me:
"There is no browse / search functionality for profiles yet. High on our roadmap, and soon to be released, is find & hire functionality via expert pages. Read more here https://peerwith.zendesk.com/hc/en-us/categories/200886675-Our-roadmap."
Ivo also gave me a workaround for the lack of easily accessible profiles.
“Log in, go to Network tab and “invite client”. Just invite yourself and you will receive a link. That is the link you can already use for existing clients to bring them to the Peerwith platform and request your service specifically.”
This works! You can request a service from me via Peerwith, and view my account info HERE
So if you are considering setting up an account, this is currently how you can send links to potential clients so they can find your profile on the site.
Very helpful and speedy response!
Publishing Research Ideas and Outcomes, by Ross Mounce
I’ve been following Ross on Twitter for some time, and had been hoping to hear him speak pretty much since reading my first few of his tweets (in the least stalkery sense possible!). His passion for a future of publishing the collected, connected works of a single (continuous?) research cycle is tangible and infectious. He spoke about new platforms and concepts such as the Journal of Brief Ideas, ThinkLab and, most prominently, Research Ideas and Outcomes – aka RIO Journal.
RIO Journal is a platform that hosts all forms of manuscript – from PhD project plans to workshop reports, grant proposals to posters, finished research papers to follow-up studies – each with its own DOI. The motive is to enable anyone to read and reference every stage of the research cycle, reducing the amount of ‘invisible’ work that researchers perform and making it accessible to an audience.
Someone from the audience asked the question that was on the tip of my tongue: are all related papers linked together? The answer was, unfortunately, no – other than when they share the same author – but I am sure a more immediately apparent linking system will be developed in future, allowing readers to move directly through each paper in the cycle.
The question was also raised of whether all papers related to one research project have to be submitted to RIO, and what happens if they are submitted across different journals – especially the final article, which may go to a more traditional title. Mounce had a simple three-letter answer for that: DOI. All papers in a particular cycle can be easily linked through DOIs, even if they are split across platforms and journals (and access types).
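The point is simply that a DOI resolves anywhere, regardless of which journal or platform hosts the output. A minimal sketch (all DOIs below are made up for illustration):

```python
# Illustrative sketch: linking the outputs of one research cycle by DOI.
# Every stage gets its own identifier, and any DOI resolves at the doi.org
# proxy regardless of publisher, so the chain survives being split across
# platforms, journals and access types.

def doi_url(doi):
    """Turn a bare DOI into its resolvable doi.org URL."""
    return f"https://doi.org/{doi}"

research_cycle = {
    "grant proposal": "10.3897/rio.1.e00000",    # hypothetical RIO DOI
    "conference poster": "10.1234/poster.5678",  # hypothetical
    "final article": "10.5555/traditional.90",   # hypothetical, traditional journal
}

links = {stage: doi_url(doi) for stage, doi in research_cycle.items()}
```

A reader (or a machine) holding any one of these identifiers can follow the whole cycle, whichever venue each output ended up in.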
Another great idea that seems like it should have been established long ago, but is only now establishing a presence because technology has provided the means to make it an effective proposition.
See this video from RIO for an overview in their own words.
Citizen Science, Open Science & scientific publication, by Muki Haklay
Muki runs the Extreme Citizen Science group at UCL, which is heavily involved in novel large-scale methods of engaging the general population in data collection and scientific outreach – from crowdsourcing digital resources, such as unused phones or computers in sleep mode, to DIY science where people develop their own implements, analysis methods and conclusions. Muki spoke of the necessary relationship between crowd-sourced data and open access to published research, and the contribution each makes to the other in fostering motivation and feedback. Muki has posted the slides of his presentation to SlideShare, and here they are:
And now that you have read my assessment of the evening, you can watch it all yourself, as Digital Science videoed the whole event!