Elsevier
Researchfish
Researchfish is a desktop platform that helps research funders collect impact-related data to advocate for research and inform funding strategies.
User dissatisfaction had materialised on Twitter, resulting in a controversial and reputationally damaging Twitter exchange.
It was my job to start them on the road to redemption.

Overview
Researchfish has two distinct user groups
Funding organisations
These are Researchfish’s paying customers, who have chosen the platform as their preferred way to capture their researchers’ outcomes.
Researchers
These are not paying customers; they are required by their funders to use the platform to submit their data and therefore have no alternative options.
Twitterstorm, a symptom of poor UX
Once a year Researchfish has a peak period for researcher submissions. During the 2022 peak, critical tweets emerged from researchers complaining about a frustrating Researchfish submission process. The response from Researchfish was a public relations disaster!
Researchfish’s Twitter account messaged individual researchers saying, “We understand that you’re not keen on reporting on your funding through Researchfish but this seems quite harsh and inappropriate. We have shared our concerns with your funder.”
This was perceived by many researchers as a thinly veiled threat to have their funding cut off. Writing on Twitter, Brian Patton, a senior lecturer at the University of Strathclyde, described Researchfish’s response as “completely unacceptable corporate bullying”. Apologies were made by Researchfish, but the damage was done.
It was clear to me that the frustration of the non-paying users was mirrored by the frustration of the Researchfish staff, who did not know how to improve things.
I saw the entire Twitter incident as a symptom of a UX problem and relished the chance to be the first UX designer to get to grips with resolving it.
Six months after the Twitterstorm, I joined Researchfish on a six-month contract to help improve the experience that had sparked the complaints, the researcher submission process.
I needed to quickly identify and implement low-risk, quick turnaround changes that could be in place before the next peak submission period in five months, whilst exploring more fundamental long-term changes to significantly enhance the user experience for researchers.
My role
I was initially the sole UX designer for the team and worked with:
Project Manager
Project Owner
Chief Analyst (subject matter expert)
Developers
Various senior management within Researchfish and Elsevier
Towards the end of my contract, I was joined by another UX designer who had taken a position as a permanent employee.
I led multiple stages of the process including:
A series of low-risk, quick turnaround improvements
Review of new Material theme
Discovery kick-off workshops
Researcher survey
Building a database of volunteer researcher users for various UX research activities
Contextual Inquiry
Early stage ideation
Low-risk, quick turnaround improvements
The low-risk, quick improvements needed to be implemented before the upcoming peak submission period and included:
An improved side navigation
Warmer, shorter and easier-to-understand copy
Adding clear card headers in place of misused notification bars
A new banner offering information in an area of known confusion for researchers
A site-wide set of guidelines and rules for button usage, appearance and placement
Removal of extraneous and distracting content
A clear way to switch back to an ‘old’ theme
An improved footer that was clearly only linked to the page content
Dialog boxes explaining some key features to users
'My awards' page BEFORE my improvements
'My awards' page AFTER my improvements
An 'award detail' page BEFORE my improvements
An 'award detail' page AFTER my improvements, addressing several issues.
Further UI improvements for consideration
Once I’d reviewed the existing submission journey, I suggested some further changes that they may want to consider at a later date. These included:
Clearer page titles
Move to sentence case for buttons rather than caps (Interfolio’s design system is caps for buttons)
More consistent use of icons
Light blue page background to help cards become more visible
Tabs for ‘Awards I submit’ and ‘Awards I don’t submit’
Move page-related drop-down menus higher up the page
Small typography changes to bring more consistency to how it looks and to help items align
A longer-term plan for the platform, based on evidence
Researchfish needed to make longer-term plans based on evidence rather than intuition. Gaining this insight was the single biggest contribution I made during the contract.
Discovery kick-off workshop
I set up and ran a discovery workshop to ensure that as a team we were focusing on the right problems and asking the right questions. My aims for the team by the end of the workshop were:
Write a problem statement
Set discovery goals and objectives
Plan discovery and wider research activities
Define how we measure success for the redesign
Research plan
Our discovery kick-off workshop produced a set of actions for myself and the rest of the core team to follow, which I then captured in a Gantt chart and wrote up in Confluence.
Researcher survey
My calls for more quantitative data from our researcher users were well aligned with the thinking of senior management within Elsevier. Together, we wrote a survey to capture this quantitative data.
Survey results
With 1,936 responses, Researchfish had achieved a Net Promoter Score of only -49 and a CSAT score of only 62%: a disappointing, but not unexpected, result.
Many respondents found the platform unintuitive to use, citing a confusing workflow, poor guidance, and not knowing where to get the help they needed.
Another issue that ranked highly among respondents was the desire to know what the funders did with their data.
Question 6 answers
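For context, the two metrics quoted above are conventionally derived from survey responses as follows. This is an illustrative sketch only; the scores and variable names below are hypothetical, not the actual Researchfish survey data.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    from a 0-10 'how likely are you to recommend us?' question.
    Ranges from -100 (all detractors) to +100 (all promoters)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def csat(scores):
    """CSAT: % of respondents rating satisfaction 4 or 5 on a 1-5 scale."""
    satisfied = sum(1 for s in scores if s >= 4)
    return round(100 * satisfied / len(scores))

# Illustrative responses only (not real survey data)
recommend = [2, 3, 9, 5, 10, 1, 6, 7, 4, 9]      # 3 promoters, 6 detractors
satisfaction = [4, 5, 2, 3, 4, 5, 1, 4, 3, 5]    # 6 of 10 satisfied

print(nps(recommend))      # (3 - 6) / 10 * 100 = -30
print(csat(satisfaction))  # 6 / 10 * 100 = 60
```

A score of -49 therefore means detractors outnumbered promoters by roughly half the respondent base, which is why it registered as such a stark signal.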
Heuristic Review of HMRC Self Assessment Journey
Conducting a heuristic review of the HMRC self-assessment journey illuminated striking similarities to the experience of researchers using Researchfish. Both involve mandatory data submission, demand precision, and often provoke procrastination. Yet while the HMRC journey is lauded for its effectiveness, Researchfish falls short. Employing Jakob Nielsen’s 10 Usability Heuristics, I dissected the HMRC journey, gaining valuable insight into exemplary user-centric design. This evaluation shed light on effective design strategies and sparked early ideas for our own redesign efforts.
User research participant library
Continuously requesting volunteers from all our users for every UX research activity would be impractical and ineffective, so I wanted to build a library of UX research volunteers.
We used a variety of channels, such as direct emails to institutions, a newsletter, and in-app messages, to explain that we were building a library of willing participants who could be contacted for UX research activities over an extended period.
By the end of my contract, the participant library had grown to include 1,418 willing volunteers, and it continued to grow.
Contextual inquiry
For the contextual inquiry, we observed 15 participants go through the submission process in their natural environment and then interviewed them to uncover hidden insights.
We made sure we had a mix of juniors and seniors, various disciplines and institutions to give us a full picture of the experience for the range of researchers on the platform.
What I did:
Participant recruitment
Set up a Calendly account for participants to book Zoom meetings
Wrote pre- and post-submission journey questions
Created and managed Miro boards to capture the notes
Ran all 15 sessions over Zoom
Ran follow-up review sessions with the internal team after each session where I captured key takeaways
Reviewed recordings of all 15 sessions for evaluation
Consolidated three sets of notes taken by myself and two colleagues
Affinity mapping
I had asked my PM and the new UX designer to capture their notes on Miro post-its, as I wanted to affinity map them in the following session. After collating all of our original notes into one group to avoid repetition, I colour-coded all 15 sets by participant.
During the affinity mapping process, we sorted hundreds of the collated notes into groups of common topics.
Once groups had been formed, we dot-voted (using cartoon animal heads) on our priorities.
I identified seven priorities, with number 1 a clear front-runner.
1: Task priority/workflow
2: Confusion with multiple fields for search
3: Table headers
4: Does not know what funders do with data
5: Does not know which outcome to place data into
6: Does not know what to add
7: Terminology not understood
Task priority/workflow
The task priority/workflow issue was essentially that users did not know how to submit, go back, or generally do the next thing they needed to do.
"Feels that you're going down rabbit holes all the time."
Participant 8
"I have no idea how to (submit)"
Participant 11
"It's not super obvious to me how to get back to that submission bit. Do I have to go back to the start again? I'm going around in the loop now. But I don't really know why."
Participant 12
Outcome
I presented all my findings from our discovery research to senior stakeholders during a three-day on-site, and as a group we agreed that the task priority/workflow issue would be given priority as part of a wider strategic business goal to sell the product into other markets.
After the presentation, the product team, along with senior stakeholders, had a short workshop to map out a rough guide to how the new flow should work.
The roadmap for the future of the product was now based on qualitative and quantitative evidence and had buy-in from all senior stakeholders both within Researchfish, and the new parent company Elsevier.
During and after 2023’s peak submission period I monitored X (formerly Twitter) and contacted my colleagues at Researchfish to evaluate the impact of the immediate low-risk improvements I’d introduced. It was clear that they had resulted in a large reduction in direct complaints to Researchfish and a large reduction in negative feedback on social media.
Following the 2024 peak submission period, the Net Promoter Score for the application increased by 47 points, from a woeful -49 to -2, and the CSAT score rose by 19 percentage points, from a disappointing 62% to a respectable 81%, showcasing a significant enhancement in user satisfaction and a massive step in the right direction.
Colin Millerchip, Researchfish Project Manager