Unbabel: Editor Onboarding

Shandra Menendez
Maria Pires
Jérémy Bléhaut
Marina Sánchez Torrón

2020
Lead Product Designer

Unbabel is a Language Operations Platform providing human-quality translation of customer service tickets at a fraction of the cost and time. The product is fuelled by AI, trained and quality-controlled by professional translators and linguists. An essential component of Unbabel’s translation pipeline is the global community of translators who correct machine-translated text. These language communities ensure that Unbabel delivers human-quality translations to its clients. However, the cost of these translators accounts for a high percentage of Unbabel’s overall operational cost.

Helping our Community Managers onboard new editors

Post-edition is different from translation. Post-edition requires the translator to look at the machine-translated text and correct the parts of the translation they deem incorrect. This process provides human-quality translations for a fraction of the cost of a standard human translation. Unbabel supports 30 different language pairs (LPs), divided among six Community Managers (CMs), each responsible for growing their communities according to the demand for work.

In order to recruit new editors into a language community, an applicant must create an account with Unbabel, pass an automatically evaluated language test (trial tasks), and pass a human-evaluated language test (training tasks).
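The three-step funnel described above can be sketched as a simple linear state machine. This is an illustrative sketch only, not Unbabel's implementation; the stage names are taken from the text, and the `advance` helper is an assumption for illustration:

```python
from enum import Enum, auto

class Stage(Enum):
    """Stages of the editor acquisition funnel described above."""
    SIGNED_UP = auto()   # account created with Unbabel
    TRIAL = auto()       # automatically evaluated trial tasks
    TRAINING = auto()    # human-evaluated training tasks
    EDITOR = auto()      # approved: first paid task available

# Linear progression: an applicant advances only after
# passing the evaluation at their current stage.
NEXT = {
    Stage.SIGNED_UP: Stage.TRIAL,
    Stage.TRIAL: Stage.TRAINING,
    Stage.TRAINING: Stage.EDITOR,
}

def advance(stage, passed):
    """Return the next stage if the current evaluation was passed,
    otherwise stay at the current stage."""
    return NEXT[stage] if passed and stage in NEXT else stage
```

For example, an applicant who passes the trial tasks moves to the training tasks, while one who fails remains at the trial stage until they pass or drop out of the funnel.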

This process takes an editor approximately five days to complete, from initial sign-up to first paid task. There is a natural drop-off in the acquisition funnel due to the screening process. However, as Unbabel started to scale, it became important for the CMs to be able to grow their language communities quickly to meet client demand for translations. Concurrently, the CMs received numerous complaints from professional translators who were unable to pass the trial and training tasks.

The Community product team began a series of experiments to address problems with the acquisition funnel. Our objective was to speed up the acquisition funnel and decrease the number of tickets to CMs.

Mapping the editor’s journey

I started by documenting the editor's journey, gathering information from my team, previous team members, the CMs and any previous research. This journey map breaks down the user types as they progress through their journey with Unbabel, from applicant to rookie, trainee, editor and, finally, senior editor. It documents the editor and business pain points, as well as opportunity areas.

After gathering our collective knowledge about the onboarding process into one document, I identified four problem areas, which led to several initiatives; I was responsible for decreasing the number of reports to CMs. The four problem areas were:

  1. The long turnaround time for onboarding editors was primarily due to the lengthy evaluation process. We believed that addressing this would have the biggest impact on speeding up acquisition.

  2. There was little guidance during onboarding about the differences between post-edition and translation, which failed to set applicants up for success.

  3. The trial and training tasks were not accurately qualifying candidates, negatively impacting the quality of translations.

  4. The majority of tickets to CMs were related to unintentional sign-ups when a language pair was closed for applications.

Helping community managers grow language communities according to task supply

CMs close a language pair when the language community is full, that is, when there are enough editors to deliver the volume of post-edition tasks within a given SLA for that community. This means that when an applicant attempts to sign up for Unbabel and their language pair is unavailable, they cannot proceed with their application. CMs reported that applicants would sign up anyway, using a language pair they have no proficiency in, which contributes to the steep drop-off in the acquisition funnel and results in a larger volume of reports to the CMs.

We hypothesised that allowing all applicants to sign up, regardless of whether the language pair was open, would decrease reports from applicants, whilst also allowing CMs to build a database of applicants ready to be contacted when a language community needed to grow quickly.

Applicants are able to sign up with Google, PayPal, Payoneer or via email, and are prompted to add the languages they know. By allowing applicants to complete the process, CMs are better equipped to grow a language community quickly. These designs were not implemented, as developer resources were directed to other initiatives; however, the CMs implemented a similar sign-up process using a Google Form as a waiting list.

Guiding editors through the application process

This initiative was handed over to me to refine when I first joined the team. We hypothesised that providing more context to editors about what to expect during onboarding would decrease reports from applicants to the CMs. I worked closely with the CMs to identify the most frequently asked questions about the onboarding flow. I included a progress bar to indicate where the applicant was in the process, the number of tasks to complete at each stage, the number of attempts allowed and the expected time to complete.

This experiment was rolled out to one language pair, English–Japanese, as this community needed to expand quickly. Due to the legacy code the platform was built on and a lack of back-end resources, the experiment took much longer to implement than anticipated. As no data had been collected beforehand about the volume of emails CMs received regarding onboarding, it was impossible to assess whether the experiment was successful.

Clarifying the steps of the application process through the onboarding emails

I was approached by the product marketing team to redesign the email flow, which complemented the work done in the previous experiment. After an applicant signs up to Unbabel, they receive nine emails notifying them of where they are in the process and what the next step is. I worked with Raquel from product marketing to redesign these emails to clarify each step of the onboarding journey and to build brand trust and excitement.

Leveraging the work from the previous experiment, Raquel and I evaluated the current email flow and noted missing information that could make the applicant’s journey more transparent. We also provided editors with post-editing tips to set them up for success. Raquel and I wrote the copy for the emails, and I designed a fresh email template.

Redesigning the language tests to accurately qualify the language proficiency of our applicants

Given that CMs were receiving complaints from professional translators who were unable to pass the onboarding process, we needed to rethink the language tests in order to qualify candidates correctly. Our team concluded that there were two key areas the language tests weren’t addressing:

Inaccurate assessment of language skills – it was very easy to pass the automatically evaluated trial tasks, as the difficulty level was low.

Lack of explanation about post-edition – the interface presents the original text on the left and the machine-translated text on the right, and it was unclear which text the editor needed to correct.

The team's dedicated linguist, Marina, suggested that a natural language test could be a better approach to screening applicants for their language skills. A natural language test consists of a short sentence in the original language with four multiple-choice translations; the applicant is required to select the correct one. We believed this test could replace the trial tasks, whilst a guided, training-like experience that included post-editing tips could be designed to revamp the training tasks.
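As a rough illustration, such a test could be modelled as a list of multiple-choice items and scored by counting correct selections. The data shapes, the sample sentence and the pass threshold below are assumptions for illustration, not Unbabel's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Item:
    """One natural-language-test item: a source sentence plus
    four candidate translations, exactly one of which is correct."""
    source: str
    choices: list   # four candidate translations
    correct: int    # index of the correct translation

def score(items, answers, pass_ratio=0.8):
    """Return (fraction correct, passed?) for an applicant's answers.
    The 0.8 pass threshold is an illustrative assumption."""
    right = sum(1 for item, a in zip(items, answers) if a == item.correct)
    ratio = right / len(items)
    return ratio, ratio >= pass_ratio

# Hypothetical English-to-Spanish item:
item = Item("How are you?",
            ["¿Cómo estás?", "¿Dónde estás?", "¿Quién eres?", "¿Qué hora es?"],
            correct=0)
```

Because each item has a single objectively correct answer, scoring can be fully automated, which is what makes this format a plausible replacement for the automatically evaluated trial tasks.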

Marina developed a natural language test for English to Spanish, whilst I designed a mobile version of it. Most editors, however, sign up for Unbabel through the desktop experience; had there not been a lack of resources and difficult legacy code to refactor, implementing the experiment on desktop would have allowed more data to be gathered.

Training new editors in post-edition to increase quality and standardise approaches

Translators who join Unbabel are often unfamiliar with post-edition. Typically, a translator receives a text in the original language and begins their translation from scratch. Post-edition requires the translator to refer to both the original text and the machine-translated text, then make any necessary corrections. As with the trial tasks, applicants are given very little instruction on what is expected of them, so I began by gathering post-edition tips from the CMs, linguists and the Unbabel language guidelines.

I sketched out various concepts for this training experience and worked with our linguist Jonas to refine it. These designs were user-tested with four bilingual participants, who were given an interactive prototype and asked to complete all five tests. The tests were timed and evaluated after each session. According to our success criteria, the designs succeeded in preparing participants for post-edition, with an average score of 3.8.

Shifting focus due to internal restructuring

Unbabel restructured the organisation during the year of the pandemic. This resulted in teams being dissolved and the scope of our product team expanding. The Community R&D team was given a new product in Q2, which resulted in half of our team pivoting towards maintaining and improving that product and the associated internal processes.

Many of the experiments were parked as a result of the restructure. I believe the most impactful initiative for our objective of speeding up the acquisition pipeline would have been to address the lengthy human-evaluation process. However, due to the size and scope of the work involved in redesigning that process, this initiative was deprioritised.