Live subtitlers: Who are they? A survey study
Isabelle S. Robert
University of Antwerp, TricS research group
isabelle.robert@uantwerpen.be
https://orcid.org/0000-0002-8595-0691
Iris Schrijver
University of Antwerp, TricS research group
iris.schrijver@uantwerpen.be
https://orcid.org/0000-0001-6091-024X
Ella Diels
University of Antwerp, TricS research group
ella.diels@uantwerpen.be
https://orcid.org/0000-0001-6170-0252
Abstract
This article reports on the results of the first study in a larger research project on the profile of the interlingual live subtitler entitled ‘Interlingual Live Subtitling for Access’ (ILSA). Intralingual live subtitling is widely used in the industry and has attracted academic attention. Interlingual live subtitling, on the other hand, is still in its infancy. Although industrial demand is increasing, academic research is lagging behind. Moreover, a competence profile and a subsequent curriculum design are yet to be developed. ILSA wants to bridge this gap. It aims to describe the profile of the interlingual live subtitler and to develop, test and validate a training course for this new professional. This article reports on the initial stage of the project, which consists of a description of the current practice and training of intralingual and interlingual live subtitlers: Who are they and how have they been trained? To answer this question, surveys were disseminated among practitioners. The responses gathered from these surveys not only shed light on the current practices and training programmes; they also demonstrate that an all-encompassing training programme on interlingual live subtitling is lacking. This confirms the belief that projects such as ILSA are needed to support the training of future interlingual live subtitlers and to improve live subtitling in the future.
Keywords: interlingual live subtitling, professional practitioners, training programme, current practice
The Internet, audiovisual media and digital technology are transforming our world. However, their potential will not be realized until they become fully accessible and enable the participation of all citizens in all aspects of everyday life. Following the 2006 United Nations Convention on the Rights of Persons with Disabilities (United Nations, 2006), the EU Directive 2010/13/EU on audiovisual media services (European Union, 2010) emphasizes in its Article 46 that:
the right of persons with a disability and of the elderly to participate and be integrated in the social and cultural life of the Union is inextricably linked to the provision of accessible audiovisual media services. The means to achieve accessibility should include, but need not be limited to, sign language, subtitling, audio-description and easily understandable menu navigation. (p. 6)
Consequently, audiovisual translation and media accessibility have become drivers of social inclusion and integration and have of late received full recognition both in the literature (Remael, Orero, & Carroll, 2012) and in EU-funded projects (DTV4ALL,[1] ACT,[2] ADLAB and ADLAB PRO,[3] HBB4ALL[4]).
In the area of subtitling for the deaf and hard of hearing, a key priority for the users has always been to access live content such as news and public events. According to Romero-Fresco (2019), several techniques can be used to produce live subtitles, such as fast typing or stenography, but nowadays the preferred technique is respeaking, a:
technique in which a respeaker listens to the original sound of a (live) programme or event and respeaks it, including punctuation marks [...], to a speech recognition software, which turns the recognized utterances into subtitles displayed on the screen with the shortest possible delay. (Romero-Fresco, 2011, p. 1)
In Europe, as Romero-Fresco (2019) explains further, the origins of respeaking are intertwined with those of live subtitling for hearing-impaired people. In the United Kingdom, for example, live subtitles were produced for the first time in the 1980s by ITV, using a standard keyboard and later Velotype; in 1991, the BBC created its own live subtitling unit, which first used keyboards and later hired stenotypists. In Flanders (Belgium), the public broadcaster VRT also experimented with fast typing in the 1980s. However, stenotyping proved expensive, because of the extensive training needed to become a stenotypist. In the meantime, speech recognition (SR) had become successful and respeaking thus emerged as an alternative: it was first tested in 2001 by the BBC and introduced during the same period in Flanders by the VRT. In other European countries, such as Spain, France and Italy, respeaking was introduced some years later, mainly because of new legislation that set subtitling quotas, even for programmes (Romero-Fresco, 2019).
Although respeaking was introduced in Europe as a profession in 2001, training at higher-education level did not start until 2007, which means that in the meantime companies had to train their own staff, a situation which led to different respeaking practices. Today, numerous courses and modules on subtitling and subtitling for the deaf and hard of hearing (SDH) are offered at European universities, but “respeaking courses at university level are still few and far between” (Romero-Fresco, 2019, p. 101). In addition, a new challenge has now emerged, as migration streams and the increased multilingual and multicultural composition of societies worldwide have led to a growing demand for access to live audiovisual content and events in a foreign language. Broadcasters such as the BBC and VRT and political institutions such as the Spanish Parliament have highlighted the need to find professionals who can produce not only intralingual live subtitles, but also interlingual live subtitles through respeaking. This new discipline will probably require translating, subtitling and simultaneous interpreting skills, although this remains to be investigated. Research on interlingual live subtitling is indeed only in its infancy, with just a few studies having been undertaken.
Only a handful of studies have examined interlingual live subtitling (ILS) to date. An initial topic in this line of research focuses on the quality assessment of ILS (Robert & Remael, 2017; Romero-Fresco & Pöchhacker, 2017). Another is the respeaking process itself: as part of the Respeaking Project (2014–2017) funded by the National Science Centre Poland, a team from the University of Warsaw conducted pioneering experimental studies using eye-tracking, electroencephalography and screen recording to understand the respeaking process and examine the competences required for this task. Although most of the work in this project dealt with intralingual respeaking, the final part was devoted to interlingual respeaking; and even though it focused mainly on cognitive load (CL), some results can inform training in interlingual respeaking. For example, the researchers observed that interlingual respeaking is perceived as more cognitively demanding than intralingual respeaking; although no prominent differences were found between interpreters and translators across all categories of CL,[5] interpreters reported having experienced lower CL in some categories, particularly in self-reported mental demand (Szarkowska, Krejtz, Dutka, & Pilipczuk, 2016).
Focusing on ear–voice span and pauses, Chmiel et al. (2017) concluded that interpreters are not necessarily better predisposed to becoming respeakers than translators. However, in a later study, Szarkowska, Krejtz, Dutka and Pilipczuk (2018) found that interpreters achieved higher-quality ratings in respeaking. An additional important finding is that a strong link was found between respeaking quality and working memory capacity (WMC): “People who have a high WMC performed consistently better as respeakers, regardless of whether they are interpreters or not” (Szarkowska et al., 2018, p. 223). In a related publication, Chmiel, Lijewska, Szarkowska and Dutka (2018) also concluded that there were no clear and straightforward advantages for any of the participant groups – that is, interpreters, translators and bilinguals – in paraphrasing for respeaking. This finding should “serve as an encouragement for anybody wishing to become a respeaker” (Chmiel et al., 2017, p. 741). Finally, Szarkowska, Dutka, Pilipczuk, and Krejtz (2017) found different indicators for “crisis points” in respeaking, such as the pace of the original dialogue, the number of speakers and overlapping speech, as well as numbers, proper names and complex syntactical structures. The authors state that as soon as one knows what the respeaking crisis points (RCPs) are, the strategies which can be adopted to deal with them can be investigated. The results of such investigations provide research-based evidence to “inform respeaker training in terms of strategies to deal with RCPs” (Szarkowska et al., 2017, p. 197).
These few studies all point in one direction: to be able to train interlingual live subtitlers (ILSers), a competence profile needs to be designed that can inform subsequent curriculum design and assessment parameters. This is precisely the aim of the Erasmus+ project ILSA (2017-1-ES01-KA203-037948; 2017–2020): to design, develop, test and validate the first training course for ILS and to provide a protocol for implementing this discipline in real-life scenarios, namely on TV and at live events. To do so, different steps (called Intellectual Outputs in the project) have been outlined: (1) assessment of current intra- and interlingual live subtitling training and practice; (2) competence analysis for ILS; (3) profile definition and competences; (4) curriculum design; (5) development of training material; (6) quality assessment of the material, and (7) protocol for the implementation of ILS on TV and in social, educational and political settings.
This article reports on the very first step of the ILSA project, that is, obtaining an overview of the current landscape of intralingual and interlingual live subtitling training and practice as reported by the practitioners themselves, that is, intralingual live subtitlers and ILSers. In other words, we will answer the following question: live subtitlers, who are they and how have they been trained?
In order to obtain an overview of the current training and practice of intralingual live subtitlers and ILSers, we designed a comprehensive online questionnaire in Qualtrics which was disseminated among practitioners in the spring of 2018. A questionnaire is “(1) a list of questions each with a range of answers; (2) a format that enables standardized, relatively structured data to be gathered about each of a (usually) large number of cases” (Matthews & Ross, 2010, p. 201). In this article, we use the term ‘survey’ to describe the study design and ‘questionnaire’ for the data-collection instrument used, in line with Saldanha and O’Brien’s (2013) usage.
According to the authors, questionnaires are popular research instruments because they allow for structured data to be collected on a large scale and should be less time-consuming than individual interviews. However, there are also some drawbacks. For example:
it is quite easy to get the design and administration of a questionnaire wrong [...] and although questionnaires are good for collecting exploratory data they are not the best instruments for collecting explanatory data [...] unless they are followed up by more in-depth interviews. (Saldanha & O’Brien, 2013, p. 152)
The authors also point to four types of error that can be associated with the survey research method. The first is coverage error, which occurs when part of the population is not included in the survey. In our case, the survey was disseminated by all four academic partners of the project (the universities of Vigo, Antwerp, Warsaw and Vienna) to more than 80 potential respondents, including practitioners and trainers, broadcasters and service providers, who were encouraged to disseminate the questionnaire further to potential respondents. The results that we present in the next section are based on 126 valid answers by practitioners, which we consider satisfactory. In addition, although the questionnaire was disseminated from four European countries, the respondents come from 27 different countries spread all over the world (e.g., Australia, Brazil, Canada, China, India, Iran, Malaysia, Korea and South Africa).
The second is sampling error, which occurs when some parts of the population have a higher probability of being included in the survey. Since the questionnaire was disseminated from Austria, Belgium, Spain and Poland, practitioners from those countries were more likely to be included in the survey. However, as we will show in the results, Austrian and Belgian practitioners are indeed well represented, whereas Spanish and Polish practitioners are represented only to a lesser extent.
The third error type is non-response error: members of the sample do not answer the questionnaire at all or answer only some of the questions. We cannot control for the former, but we can for the latter: 163 respondents started filling in the questionnaire and 126 of those questionnaires were filled in properly.
Finally, the fourth type is the measurement error, which occurs when the actual response differs from the “true” response. An example is the Hawthorne effect, “which occurs when people alter their normal behaviour because they are aware that they are being studied” (Saldanha & O’Brien, 2013, p. 31). Although we cannot exclude this risk, we do not think this applied to our questionnaire, since it was anonymous and online.
According to Saldanha and O’Brien (2013, p. 153), the questionnaire design is the most important stage of a survey study and clarity on the construct that is to be investigated is crucial. In our case, the questionnaire design was a collaborative undertaking between all four partners of the ILSA project, with different rounds of feedback and pilot testing in Qualtrics. Research on training in respeaking and intralingual live subtitling has also been consulted – for example, that of Arumí Ribas and Romero Fresco (2008), Remael and van der Veer (2006) and Romero-Fresco (2012).
The questionnaire consisted of three parts. The first part contained questions relating to age, gender, mother tongue, country of origin, country of residence, education and current function (questions 1–10). The second and third parts were dedicated to intralingual and interlingual live subtitling, respectively. After filling in the first part, the respondents could choose to answer both parts 2 and 3, if they worked or had worked as both intralingual live subtitlers and ILSers, or to answer only one part, if applicable. Parts 2 and 3 each consisted of 27 questions, which were similar for both intralingual and interlingual subtitling. The survey focused on the respondents’ professional practice (e.g., context of live subtitling, such as television or live events, years of experience) and in particular on their training: type of training (e.g., in-house, at university, vocational), course prerequisites, focus and structure of the course, mode of delivery and assessment. In addition, the respondents were asked whether they felt they had been well prepared, whether some competences covered in their course turned out to be superfluous to or missing in their professional practice, and how important formal training and/or practical experience in specific fields, such as consecutive interpreting or subtitling, was for successful live subtitling with speech recognition.
Different types of questions were formulated: closed questions (one answer or multiple answers, plus the possibility to add a comment; see the examples in Figures 1 and 2), open questions and Likert-scale questions (see the example in Figure 3). The full survey questionnaire will soon be available on the ILSA Project website.[6]
Figure 1 Example of a closed question with multiple answers.
Figure 2 Example of a closed question with a single answer.
Figure 3 Example of a Likert-scale question.
In this section, we present the results of the survey in several subsections: section 3.1 focuses on demographics and professional practice, section 3.2 on the type and timing of training in intralingual and interlingual live subtitling, section 3.3 on intralingual live subtitling training content, section 3.4 on interlingual live subtitling training content, section 3.5 on training perception and section 3.6 on the prerequisites for a successful career as an intralingual live subtitler or an ILSer. The results are based on 126 answers. Some results relate to specific groups: of the 126 respondents, all of them filled in the section on demographics, 96 filled in the questionnaire relating to intralingual live subtitling only, 9 the one relating to interlingual live subtitling only and 21 filled out the questionnaires for both intralingual and interlingual live subtitling. However, not all of the questions were answered by all of the respondents.
The results based on all 126 answers show that live subtitlers are generally young women: 70% of the participants are under 40 and 66% are female. The mean age of the participants working exclusively as intralingual live subtitlers (N=96) is 36. Those doing both intralingual and interlingual live subtitling (N=21) are on average 33 years old. The participants working exclusively as ILSers (N=9) appear to be older, with a mean age of 54. In the group of intralingual live subtitlers (which we will call the "intralingual live only group") we find a similar percentage of women (67%) to that of all the participants taken together; women are even more strongly represented in the group of ILSers (78%) (which we will call the "interlingual live only group"), but less represented in what we call the "hybrid group" (i.e., intralingual and interlingual) (57%).
With respect to the country of origin, the participants come from a variety of countries, as explained in section 2.1 (see Figure 4). Besides this, 21% of the respondents do not reside in their country of origin. The proportion of participants pertaining to the three subtitling groups is approximately the same for each country.
Figure 4 Country of origin.
Regarding the mother tongue, as many as 28% of the participants have English as their mother tongue. Approximately the same proportion have German as their mother tongue (both first in the ranking). Dutch is spoken by 16%, 8% speak French and the remaining 20% speak other languages. The ranking is similar in the intralingual live only group, but different in the other two groups: in the interlingual live only group (N=9) there are almost as many languages represented as participants (Arabic, Dutch, English, Estonian, French, German, Italian and Swedish), whereas in the hybrid group Dutch is predominant (53%), before “other” (33%) and English (14%). This is probably due to the fact that Flanders (VRT) was a pioneer in respeaking for live subtitling.
As far as education is concerned, we can state that live subtitlers generally have a degree, with 33% holding a bachelor’s degree, 50% holding a master’s degree, and 3% even a PhD. Seven per cent mentioned “other” as their highest level of education and generally specified a post-graduate diploma or certificate. The remaining 7% have a secondary-school certificate. The same trends are observed when we look at each group separately, with an even higher number of master’s degrees: 67% in the interlingual live only group and 62% in the hybrid group.
Eighty-one per cent of the master’s degrees are language-related (N=51). Although the proportions are different in all three groups, language-related master’s degrees remain predominant, with 77% in the intralingual only group, 67% in the interlingual only group and even 100% in the hybrid group. Among the 51 participants with a language-related master’s degree, 47% have a master’s in Translation, 26% a master’s in Interpreting and 27% another language-related master’s degree, such as a master’s in Linguistics or Literature. The same trend applies to the intralingual live only group (with 46%, 24% and 29% respectively), but there are more interpreters in the hybrid group (39%) and the interlingual live only group (50%).
As far as the number and the type of “functions” (or duties) are concerned, we had hypothesized that the respondents would combine different functions, which proved to be true. Figure 5 gives an overview of the number and type of functions that the respondents carry out. Interestingly, the majority of those participants who provide intralingual or interlingual subtitling also have other professional engagements. This is even truer of those in the hybrid group.
Figure 5 Percentage of participants with one, two or more than two functions, per group.
The respondents could select different functions and since many of them combine two or more, the number of functions is much higher than the number of participants. In total, 345 functions have been selected. At 29%, the function of intralingual live subtitler has the highest score, followed by intralingual subtitler (23%). Interlingual live subtitling occupies fifth place, at 6%. All the details are illustrated in Figure 6.
Figure 6 Percentage of functions represented, whether combined or not.
When we look at the three different groups separately, we see a similar distribution in the intralingual live only group, although the function of ILSer in that group has the lowest score. This is not surprising: this group consists of people who have not answered the part of the questionnaire relating to interlingual live subtitling, so it is logical that they hardly ever act as ILSers and have decided to answer the questions relating to intralingual live only. In the interlingual live only group the ranking is different, with intralingual subtitler and translator sharing first place, followed by interlingual live subtitler and teacher. Finally, in the hybrid group, intralingual live subtitler comes first, followed by interlingual live subtitler, intralingual subtitler and translator.
As stated before, of the 126 respondents, 96 filled in the questionnaire relating to intralingual live subtitling only, 21 for both intralingual and interlingual live subtitling, and 9 for interlingual live subtitling only. In other words, the results on intralingual live subtitling as a practice are based on 117 answers and those on interlingual live subtitling practice are based on 30 answers. Since some questions remained unanswered, we have decided to present both the values and the percentages in the following figures.
In an initial question, the respondents had to report whether they work for television, for live events or for both. They could also answer “other” and specify further. In both groups, television appears to be the main setting, as illustrated in figures 7 and 8.
Figure 7 Work context for intralingual live subtitling.[7]
Figure 8 Work context for interlingual live subtitling.
As mentioned before, many respondents combine at least two functions. The results show that intralingual live or interlingual live subtitling are in fact rarely full-time jobs (Figure 9). Only 12.4% of the intralingual live subtitlers work sporadically (less than 1 hour per week) as intralingual live subtitlers and only 17.1% work more than 20 hours per week in that function. A majority (70.5%) work between 1 and 20 hours a week as intralingual live subtitlers. For ILSers the results are different: 46.7% of them work sporadically as ILSers and almost as many (43.3%) work between 1 and 20 hours a week in that function. In other words, only 10% work more than 20 hours a week as ILSers.
Figure 9 Number of hours worked per week.
Finally, the respondents were also asked how many years they had been working as an intralingual live subtitler or an ILSer. Both groups have around six years of professional experience.
In this section, we concentrate on the type of training that the respondents received in intralingual live subtitling or interlingual live subtitling and when they were trained.
In the questionnaire, the respondents could select different types of training: self-taught, in-house training, training at a higher-education institution (HEI) and vocational training after graduation or other. To distinguish between these different types of training and because the respondents could select more than one type, the respondents have been allocated to one of the following four groups: in-house, HEI, vocational or a combination of training. Those respondents who selected only “self-taught” are not included in the analysis of the training, since no information about their training is available. However, all four groups include people who have selected “self-taught” in addition to a particular training type. For example, a respondent who selected “self-taught” and “HEI” is included in the HEI group.
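As a purely illustrative aside, the allocation rule described above can be summarized in a short Python sketch; the category labels and the function name below are our own and not part of the questionnaire.

```python
# Hypothetical sketch of the group-allocation rule described above.
# Respondents could tick several training types; "self-taught" on its own
# leads to exclusion from the training analysis, while in combination with
# another type it is simply ignored.

SUBSTANTIVE_TYPES = {"in-house", "HEI", "vocational"}

def allocate_group(selected_types: set) -> str | None:
    """Map a respondent's selected training types to one analysis group."""
    substantive = selected_types & SUBSTANTIVE_TYPES
    if not substantive:                    # only "self-taught" (or nothing) selected
        return None                        # excluded from the training analysis
    if len(substantive) == 1:
        return next(iter(substantive))     # a single type: in-house, HEI or vocational
    return "combination"                   # more than one substantive type selected

# For example, "self-taught" combined with "HEI" ends up in the HEI group:
print(allocate_group({"self-taught", "HEI"}))      # -> HEI
print(allocate_group({"in-house", "vocational"}))  # -> combination
print(allocate_group({"self-taught"}))             # -> None
```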
As shown in figures 10 and 11, in-house training is widespread among both intralingual live subtitlers and ILSers, but much more so in the former than in the latter (76% versus 45%). There is also an appreciable difference regarding HEI training: only 6% in the intralingual live training group against 41% in the interlingual live training group. Vocational training is less frequent in both groups (4% and 9%), whereas combined training seems to be more frequent for intralingual than for interlingual live subtitling. However, these percentages have to be considered with caution, since the number of respondents for interlingual live subtitling remains low (22 answers in this case). As explained in section 1, respeaking training at a higher-education level did not start until 2007, which means that, in the meantime, companies had to train their own staff. This statement by Romero-Fresco (2019) is confirmed in the results shown here, as well as by the fact that respeaking courses at university level are still scarce.
Figure 10 Type of training for intralingual live subtitling.
Figure 11 Type of training for interlingual live subtitling.
The next question was related to the moment in their career when the respondents were trained in intralingual and/or interlingual live subtitling. We start with intralingual live subtitling. The respondents could select one or more of the following options: (1) before they started working as an intralingual live subtitler; (2) while already working as an intralingual live subtitler; (3) while already working as a subtitler. As shown in Figure 12, only a minority of the in-house training group were trained exclusively before they started working as an intralingual live subtitler (N=8; 12%). Many were trained while working as subtitlers (N=25; 37%) or while working as intralingual live subtitlers (N=10; 14%). The remaining 37% selected several options: 15% (N=10) were trained while working as a subtitler and as an intralingual live subtitler; 13% (N=9) were trained before working as an intralingual live subtitler and while working as a subtitler, and 9% (N=6) were trained both before and while working as an intralingual live subtitler. For the other training types, there are fewer respondents (Figure 13). The same wide variation can be observed as for in-house training.
Figure 12 Timing of the training in intralingual live subtitling, for in-house training. Note: "IntraLS" means intralingual live subtitler.
Figure 13 Timing of the training in intralingual live subtitling, per training type (except in-house). Note: "IntraLS" means intralingual live subtitler.
As far as ILSers are concerned, only a few answers were obtained (Figure 14). However, it seems that those who received in-house training were trained in equal numbers either before or while working as an ILSer. In the HEI training group, people were trained either before, or both before and while, working as an ILSer.
Figure 14 Timing of the training in interlingual live subtitling, per training type. Note: "InterLS" means interlingual live subtitler.
The respondents were also asked to state when they completed their training. For intralingual live subtitling, 75% of the in-house training group and the same proportion of the vocational training group completed their training after 2011; for the two other groups (HEI and combined), 100% completed the training after 2011. The same trend was observed for interlingual live subtitling: all the respondents (100%) completed their training after 2011. Again, this confirms Romero-Fresco’s (2019) statement about the relatively recent offering of respeaking courses, especially at HEIs. Our results show that when courses are organized at HEIs, it is generally at master’s level. In the next section, we zoom in on training focus, structure, delivery mode and assessment.
Questions relating to training (e.g., focus, structure, delivery mode and assessment) were mainly open questions. The results are discussed for each type of training separately. Combined training is not discussed separately; the comments of the respondents who had combined training have been included in the respective section.
This section draws on the answers of 15 respondents. While only one respondent stated that they had been trained at bachelor’s level, all the others were trained at master’s level. The master’s degrees in which the course was organized are generally those in Interpreting (N=6) or in Audiovisual Translation (N=5); the remaining courses belonged to degrees in Translation and/or Interpreting. The distribution between self-contained courses and courses belonging to a larger module is well balanced (almost 50–50%). No aptitude test had to be taken before starting the course, but there were prerequisites, as for every master’s degree, namely having completed a bachelor’s degree, preferably in Translation or a language-related discipline. Regarding the number of weeks of training received and the number of contact hours per week, the results appear to be skewed, with one practitioner reporting 52 weeks and 10 hours a week; this is probably the number of weeks of training for the whole master’s degree and the number of hours of audiovisual translation in his case. Even so, training time appears to vary considerably, from 1 hour per week for 8 weeks to 2 hours a week for 28 weeks.
As far as the focus of the course is concerned, 66% of the respondents report a strong focus on practice, with some theoretical introduction. The practical part generally consisted of software use and profile creation, dictation practice and then respeaking practice. The most frequent set-up for respeaking training is individual respeaking with self-correction (60%), but 40% of the respondents reported a combination of training set-ups: individual respeaking without correction, with self-correction or with parallel correction.
Modes of delivery are rather traditional, with 60% of the respondents reporting face-to-face lectures and/or seminars (workshops), 33% reporting face-to-face lectures and/or seminars (workshops) and an internship, and only 7% reporting online lectures. Finally, continuous assessment is predominant, either on its own or combined with a final exam. Final exams as the only means of assessment are not so frequent (20%). The respondents were also asked whether their accuracy rate was assessed and, if so, with which model. The accuracy rate was measured for 40% of the participants, but no particular model for measuring the accuracy rate was mentioned.
This section draws on the answers of 72 respondents, although not all the respondents answered all the questions relating to their in-house training. In contrast to training at HEIs, an aptitude test is commonplace for in-house training (39 out of 55 respondents, i.e., 70%). Almost all the respondents report having done a respeaking or a dictation test in addition to a language test. It has to be noted, however, that the aptitude test is sometimes part of a selection test, prior to employment. As far as prerequisites for the in-house training are concerned, only 22% of the respondents (N=38) said that there were no particular prerequisites and 7% said that they had to have passed the aptitude test or a selection test. The others report a variety of prerequisites, such as (1) having a degree (language-related bachelor’s or master’s; 26%); (2) having an advanced knowledge of the language as far as grammar, spelling and punctuation are concerned, and an excellent knowledge of current affairs, politics and sport (26%), or (3) already working as a subtitler or an intralingual subtitler (19%).
The in-house training generally focused on speech recognition, respeaking and subtitling and it was mainly practical, with a few respondents reporting a short theoretical introduction on respeaking. The duration and structure of the training vary considerably: 39% say that the training was on-the-job, without a real course or structure, that is, some kind of coaching by colleagues. For example, one respondent says: “I worked three days in a team with two colleagues, looking at what they were doing, and then I started to work in the normal program.” Another 13% report having had two to four days of training, which consisted of learning to work with the speech-recognition software (Dragon) and then practising with it. Two of them, for example, say the following:
Example 1: “Little theoretical introduction, only two days of practicing with SR Software and a tutor, the rest was learning on the job. In the beginning we had more hours of preparation ahead of live subtitling, so we could add words to the vocabulary and feed it with corrected documents.”
Example 2: “I think it was something like two days of introducing me to the program and such, theory. Then it was just practicing and perfecting. I was considered a respeaker after probably about a month of training. Before, I was just accompanying the respeakers and could take over for a shorter period of time - maybe 10 min. every hour or something like that.”
A large group of respondents (48%) report a longer period of training, from one intensive week to three months. Some respondents explain the structure of the course very clearly, as illustrated by the following examples:
Example 3: “The course was structured over six weeks. The first two covered basics of the SR software and intensive focus on improving re-speaking abilities. There were assessments every two weeks, determining whether trainees would proceed with the course. Weeks 3–4 consisted of improving editing and 'blocking' abilities, as well as gaining familiarity with associated software. Weeks 5–6 had a stronger focus on simulating live re-speaking and shift structures etc.”
Example 4: “I was trained during 8 weeks, but only 2 or 3 times a week:
· One theoretical introduction
· One session for the creation of my Dragon account
· Various sessions on sports respeaking (mainly training and some theoretical points)
· Various sessions on news programs
· One session on the weather information
· Two sessions on political programs.
Finally, three real live respeaking sessions with two other respeakers so that I had 20 minutes respeaking, 40 minutes break, 20 minutes respeaking, etc. The first real live respeaking was on tennis, the other two on football.”
Example 5: “10 weeks – week 1 theoretical introduction and familiarity with software, weeks 2–4 profile building with reading from articles, weeks 4–7 respeaking from videos and then 7–10 a mix of respeaking from videos/respeaking using captioning software.”
Example 6: “The in-house training lasted approx. 3 months. We spent approx. 1 week with a focus on dictation and developing voice models. From then on, we built up stamina in respeaking – moving from 5 minutes to 15-minute blocks of time. There was a lot of individual practice – mostly on news broadcasts, as this is where most respeakers first go live. Training was also given on how to prepare programmes (i.e., editing scripts, familiarity with processes associated with each programme, where to find information) and technical skills (developing voice models, building up vocabularies, macros and house styles). During this time, we also monitored our own accuracy and regularly assessed our own work.”
As in the HEI training group, the most frequent set-up for respeaking training is individual respeaking with self-correction (53%) and, here again, a sizeable group (37%) of the respondents report a combination of training set-ups: individual respeaking without correction, with self-correction or with parallel correction. In addition, 8% report a set-up consisting only of individual respeaking with parallel correction, and 2% report a set-up without any correction at all.
As far as the mode of delivery is concerned, the vast majority of the respondents were trained in either a face-to-face setting (85%) or through a combination of face-to-face and online workshops (9%). Others were trained in an internship. Finally, continuous assessment is predominant (87%), either alone or combined with a final exam. Final tests as the only means of assessment are rare (2%). Others were evaluated through the internship. Again, the respondents were also asked whether their accuracy rate was measured and, if so, with which model. The accuracy rate was measured for 58% of the participants and, of those, 56% reported NER as the model used.
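For readers unfamiliar with it, the NER model mentioned by these respondents scores live subtitling accuracy along the following lines; this is a sketch of the formula as commonly presented in the respeaking literature, not necessarily the exact implementation used by the respondents’ employers.

```latex
% NER accuracy rate, as commonly described in the respeaking literature:
%   N = number of words (and punctuation marks) in the respoken subtitles,
%   E = edition errors (introduced by the respeaker),
%   R = recognition errors (introduced by the speech recognition software).
\[
  \mathrm{NER} \;=\; \frac{N - E - R}{N} \times 100\%
\]
% A frequently cited benchmark for acceptable live subtitles is an
% accuracy rate of at least 98%.
```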
This section draws on the answers of five respondents. Again, we cover the different topics relating to the training, as in the previous sections. Only two respondents had to take an aptitude test, one of them explaining that it consisted of speed typing. As far as prerequisites are concerned, the answers vary. Two respondents say “none”, but the others report “experience either in voice acting, a perfect pronunciation, or to be a translator”, “secondary education and at least three years of work experience no matter in what profession” or “good language skills, good working memory, good general knowledge, completed vocational training or a university degree”.
Regarding the focus and the structure of the course, three respondents were very brief, only mentioning two days of training that focused on “using a SR software, dictation, respeaking and intralingual editing”. A fourth respondent, however, describes his training in detail, explaining that it lasted several months and that each month one of the following aspects was trained specifically (Example 7); a fifth respondent (Example 8) explains that the focus was diverse:
Example 7:
· “introduction to the professional field of speech-to-text interpreting/STTI (outline of profession, hearing impairment, code of professional ethics, role behaviour, etc.)
· techniques and strategies of interpreting
· mnemonics, research, terminology
· basics of sign language
· speech-to-text interpreting/STTI and live subtitling with speech recognition and keyboard (using Nuance Dragon Naturally Speaking, training for speed typing 10-finger system with shortcuts/hotkey system)
· live subtitling software for TV
· NER, quality management and assurance, STTI conventions
· on-site and online interpreting (online platforms, troubleshooting, technical equipment, editing and revising as Co-STTI working in a team, etc.)
· economic and legal foundations (freelancing, cost-bearer and funding agencies, etc.)
· simulations and exam preparation.”
Example 8: “there wasn’t one main focus: the course covered interpreting skills, deaf history and culture, respeaking, building and maintaining Dragon vocabulary for special subject areas, legal and business aspects of work as a speech to text interpreter, speech to text interpreting by typing, professional conduct as a speech to text interpreter. The course was taught over 9 months. Participants and teachers met for 2–4 days each month. In between, participants studied and practised at home. We first learnt interpreting skills and were then trained in respeaking.”
The modes of delivery were a combination of different modes, including internships, for the two respondents who described the course in more detail (examples 7 and 8 above). The others report a combination of face-to-face lectures and workshops. As far as the respeaking set-up is concerned, again, we find a variety of answers, with two respondents being trained only in individual respeaking without correction, one respondent reporting training in individual respeaking with self-correction, one respondent reporting a combination of individual respeaking with self-correction and parallel correction and, finally, one respondent reporting a combination of all three set-ups. This respondent is also the one who described his training thoroughly (Example 7 above).
Assessment modes are diverse, too: one final exam, one internship, one continuous assessment and two combinations of assessment modes. Finally, four respondents say that their accuracy rate was measured: three of them report NER, one cannot remember what was used.
The results relating to interlingual live subtitling training at HEIs are based on the responses of five respondents. We cover the same topics as in section 3.3.1. All the respondents received their training at master’s level, four of them during their master’s in Interpreting, one during his master’s in Translation. This is therefore very similar to the training in intralingual live subtitling. All the respondents but one answer that they were trained in a self-contained course. This result is different from that for intralingual live subtitling training, where the answers were more balanced. However, the present results are based on only five answers, versus 15 for intralingual live. As far as the aptitude test is concerned, the results are similar to those for intralingual live: only one respondent had to take an aptitude test, but he does not provide any details about the test. The same holds for prerequisites: four respondents say that a Bachelor in Applied Linguistics is needed. Another states that “training in interpreting would be best”, but that does not seem to constitute a formal prerequisite. The number of weeks of training and the number of hours per week vary. Therefore, the total number of contact hours varies, too: 24 (12×2), 36 (12×3), 52 (26×2), 56 (28×2) and 104 (26×4). The results are rather similar to those relating to intralingual live subtitling training. In other words, the training is spread over one or two semesters, with 2 to 4 hours of training a week.
All the respondents were rather concise about the focus and the structure of the course, providing no detailed description. In the one-semester courses (i.e., 24 and 36 contact hours), the main focus is on the practice of respeaking and on acquiring basic knowledge of the subtitling software, with one week being dedicated to the preparation of the profile for speech recognition. In the two-semester courses (i.e., 52 and 56 contact hours), two respondents report a theoretical introduction to live subtitling, followed by making a speech profile and respeaking practice, and one of them even paid two visits to broadcasting companies. The last one explains that the training was in a test phase, in collaboration with a broadcaster. Regarding the training set-up, two respondents report having undergone training in individual respeaking with self-correction, two others a combination of individual respeaking without and with self-correction, and one training in individual respeaking with parallel correction.
The modes of delivery are mixed: two respondents report face-to-face lectures and/or face-to-face seminars and workshops, one respondent reports a combination of face-to-face lectures and an internship, and one only online seminars. This is also in line with the results for intralingual live. Continuous assessment is reported by all the respondents, either as the only means of evaluation (N=2) or in combination with an exam (N=3). The accuracy rate was measured for two respondents, with no further mention of a specific accuracy rate model. However, since such a model has only recently been developed (Robert & Remael, 2017; Romero-Fresco & Pöchhacker, 2017), it could hardly have been applied at the time of their training.
This section is in principle based on 11 respondents, since 11 people selected in-house training. However, very few answered all the questions, as will be shown below. Regarding aptitude tests, two respondents report that they had to take one, whereas this was not the case for four respondents. The others did not answer the question. Only two respondents answered the question about course prerequisites, with “Excellent knowledge of current affairs and sports, good language skills” and “Some kind of linguistic studies, being quick of apprehension, being able to multitask”. Regarding course focus and structure, only four answers were collected. Three respondents explain that it was not a real course. Here are examples of their comments:
Example 9: “It wasn't exactly a real course, I was kind of taught on the spot, by attending a few interlingual live subtitling sessions, performing one out of four of the possible tasks in the process and receiving a bit of explanation. I could already respeak, edit intralingually, use SR software, and I was already trained as an interpret[er]. The main difference with intralingual was our working method: how you only had one task to perform instead of all at once.”
Example 10: “getting to know the speech recognition Software, develop one’s respeaking skills, respeaking techniques, about 4 weeks of on-the-job training plus extra hours to build up the respeaking vocabulary needed for the different broadcasting programmes.”
Another respondent gives a clear idea of the course structure as follows:
Example 11: “The aim was to get used to the software, training in respeaking and general subtitling skills:
· Week 1: general introduction to respeaking and live subtitling
· Week 2–3: dictation and familiarity with software
· Week 4–5: fast dictation and interlingual respeaking, getting familiar with the programs to be subtitled
· Week 6: dry runs providing non-broadcast subtitles to specific TV programs.”
The training set-ups are individual respeaking with either self-correction (N=2) or parallel correction (N=2). Individual respeaking without correction was reported only once. Three respondents report face-to-face lectures and/or face-to-face seminars and workshops as the mode of delivery. Assessment was generally continuous (3 respondents out of 5), or a combination of continuous assessment and a final exam (N=1), or a final exam only (N=1). The accuracy rate was generally not measured; only one respondent mentions NER, although that model does not apply to interlingual live subtitling.
Although three respondents selected vocational training, only one actually answered the questions relating to that type of training. The respondent in question says that he had to take an aptitude test consisting of typing, social knowledge and respeaking. The only prerequisite was a certificate of secondary education. The course consisted of six months of online and on-site training that focused on using the software and creating shortcuts, training in interpreting skills, and knowledge of the needs of hard-of-hearing people. Consequently, the mode of delivery was a combination of face-to-face and online lectures and seminars/workshops. The respondent was trained to respeak without correction of errors and was assessed through a combination of continuous assessment and a final exam. The accuracy rate was measured, but no details were given.
After focusing on the training itself, the questionnaire contained questions relating to the respondents’ perception of their training: (1) Do they feel that they were well prepared for their current practice?; (2) Do they feel that some of the competences that were trained turned out to be superfluous, and thus unnecessary?; (3) Do they feel that some competences were not addressed in the training, and were therefore missing? Again, we review the answers of the different groups separately but, as in section 3.2, we report the results for intralingual and interlingual live together.
As shown in figures 15 and 16, the respondents are rather positive about their training (question 1), independently of the type of training they received. Only very few respondents (red bar in Figure 15) of the intralingual live training group answer that they did not find their training to be good preparation for their current practice.
Figure 15 Training considered good preparation for the current practice as intralingual live subtitler.
Figure 16 Training considered good preparation for the current practice as ILSer.
The question on whether or not they felt well prepared was an open question. We obtained the results above by recoding all the answers into “no”, “more or less” and “yes”. However, we now zoom in on some of the comments to shed light on the reasons why some respondents felt that they were either well prepared for the job or not. Here are a few negative (12–14) and positive (15–18) comments from the in-house intralingual live subtitling training group:
Example 12: “This was definitely too little and just theoretical.”
Example 13: “No, it was too short. Too many new things in three days.”
Example 14: “No. You need lots of practice before you can produce decent live subtitles. A week is not enough. I was the last one to be trained this way, though; these days it’s more spread out and you get more time and more feedback.”
Example 15: “Yes, I was trained in-house while already working as a subtitler and learned live subtitling step by step. After practising with the speech recognition program, I started subtitling short and easy TV programs first. The advantage of gaining the practice on the job is being surrounded by very experienced colleagues who answer all your questions and help you a lot. You can also watch them during live subtitling.”
Example 16: “Yes, it was a good practice since it was very much hands-on only. This method kept me from overthinking the process of respeaking. I just did it continuously and got more used to it along the way, which in turn improved the quality of my output.”
Example 17: “Yes, it was a good training because I began with easy things to respeak and with no correction. Then I had to correct what Dragon was writing, but still on easy programs. When I was more or less comfortable with that, I began respeaking more difficult programs. Finally, I had these three sessions of live respeaking with two other respeakers and it was a good thing to do to be prepared for the real conditions.”
Example 18: “Yes, it was a very good preparation. I needed to learn how to live subtitle fast. It was very good to have a hands-on, very practical course. I learned how to work with the software, how to respeak and make corrections. Still, it is not enough to be able to live subtitle perfectly. Now, I'm already live subtitling. Of course, when technical difficulties occur, it is difficult to know how to response when it hasn't happened before. So during live subtitling, I'm still continuing to learn, practice and evolve. I think that to become a very good live subtitler you need lots and lots of practice.”
Question 2 was related to superfluous competences. As shown in Figures 17 and 18, except for a very few of them (red bar in figures 17 and 18), the respondents do not have the impression that the competences they were trained in turned out to be superfluous afterwards.
Figure 17 Superfluous competences in intralingual live training.
Figure 18 Superfluous competences in interlingual live training.
Of the five people reporting superfluous competences in the intralingual training group, only one comments on his answer: “I would say the historical theory of subtitling (as an introduction) was not necessary.” In the interlingual group, no details were given.
Question 3 was related to competences that were not addressed during training but which the respondents deem necessary. The opinions here are more divided, as shown in figures 19 and 20. However, again, caution is recommended because of the low number of respondents in the interlingual live group.
Figure 19 Missing competences in intralingual live training.
Figure 20 Missing competences in interlingual live training.
In the in-house intralingual live training group, the respondents reported the following missing competences, skills and/or knowledge: a more detailed overview of the live subtitling software and its capabilities, information about software updates and latest developments, how to prevent errors, how to correct fast, how to split attention, and how to manage one’s voice. Here is an example of a comment:
Example 19: “I was taught the most important things. But at the beginning, I was not really taught which errors I should correct and how to deal with my vocabulary (which new words I need, for example). But later my colleagues taught me more specificities of respeaking. My colleagues also had to develop their skills. And I also learned new competences by myself.”
In the HEI training group, the respondents also refer to software training, but, in addition, two respondents mentioned “working with more than one interpreter at the same time” and “keeping the limited space of a subtitling in mind while respeaking”. Finally, in the vocational training group, one respondent mentions “knowledge and training of interlingual live subtitling, shortcut and hotkey systems, tools for terminology extraction and quality assurance regarding self-evaluation”.
In the interlingual live group, only the respondents from the HEI group wrote some comments. Basically, they seem to have missed training in subtitling software, stress control, typing, voice control and working with more than one subtitler at the same time.
The last two questions of the questionnaire were of the Likert-scale type (from 0 to 4; 0 being not important and 4 very important). The first question concerned the importance of prerequisites for successful intralingual or interlingual live subtitling with respeaking. The respondents had to rate the following prerequisites: formal training and/or experience in consecutive interpreting, simultaneous interpreting, subtitling and translation. The results in Figure 21 are based on 81 answers for intralingual live and 19 for interlingual live and they display the mean for each prerequisite.
Figure 21 Importance of prerequisites for successful intralingual (IntraLS) or interlingual (InterLS) live subtitling with respeaking (mean).
For intralingual live subtitling, we conducted a non-parametric test of comparison of four non-independent groups (Friedman’s ANOVA) and found a significant overall effect, with χ2(3)=96.56, p<.05. Consequently, we conducted additional tests of comparison of two related samples (post hoc tests) to determine which scores differed significantly. All the Wilcoxon Signed Ranks tests were significant, except the test for the difference between consecutive interpreting and translation. It has to be noted that the level of significance was set at .008, that is, .05 divided by the number of pairwise comparisons (six), as recommended by Field (2009, p. 577). As a result, we can conclude that the respondents deem formal training and/or experience in subtitling to be significantly more important than in simultaneous interpreting, and that the latter is in turn significantly more important than in consecutive interpreting. However, there is no difference between consecutive interpreting and translation.
We conducted the same analyses for interlingual live subtitling. The first test (Friedman’s ANOVA) was not significant, with χ2(3)=2.17, p>.05, which means that the respondents consider formal training and/or experience in each of the four suggested disciplines to be equally important.
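The analysis reported above can be reproduced with standard statistical libraries. The sketch below, which assumes that the Likert ratings are stored with one column per prerequisite (an assumption about the data layout, not a detail reported in this article), runs a Friedman test followed by Bonferroni-corrected Wilcoxon signed-rank post hoc comparisons.

```python
# Sketch of the procedure described above: Friedman's ANOVA across the rated
# prerequisites, followed by pairwise Wilcoxon signed-rank post hoc tests with
# a Bonferroni-corrected alpha (.05 divided by the number of comparisons).
# The column names and data layout are illustrative assumptions.
from itertools import combinations

import pandas as pd
from scipy import stats


def friedman_with_posthoc(ratings: pd.DataFrame, alpha: float = 0.05) -> None:
    cols = list(ratings.columns)
    chi2, p = stats.friedmanchisquare(*[ratings[c] for c in cols])
    print(f"Friedman: chi2({len(cols) - 1}) = {chi2:.2f}, p = {p:.4f}")
    if p >= alpha:
        return  # no overall effect, so no post hoc comparisons are needed
    pairs = list(combinations(cols, 2))
    corrected_alpha = alpha / len(pairs)  # Bonferroni correction (e.g., .05/6 = .008)
    for a, b in pairs:
        _, p_pair = stats.wilcoxon(ratings[a], ratings[b])
        verdict = "significant" if p_pair < corrected_alpha else "n.s."
        print(f"{a} vs {b}: p = {p_pair:.4f} ({verdict} at alpha = {corrected_alpha:.4f})")


# Hypothetical usage, with one 0-4 Likert rating per respondent and prerequisite:
# df = pd.read_csv("intralingual_prerequisites.csv")
# friedman_with_posthoc(df[["consecutive", "simultaneous", "subtitling", "translation"]])
```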
The very last question was of the same type; the respondents were asked to rate the following prerequisites, with the results shown in Figure 22:
· Ability to cope with turn-taking or overlapping dialogue
· Ability to multitask: listening while speaking, writing while reading
· Ability to select the essence of the source text and rephrase it into the same language/interpret it into the target language (TL)
· Accurate spelling, grammar and punctuation
· Awareness of the needs of the deaf and hearing impaired
· IT competences
· Knowledge of current affairs
· Knowledge of the rules and regulations of companies, e.g., style sheets and norms
· Perfect command of the source and target languages
· Speech recognition: Interaction with the software while respeaking (e.g., clear enunciation, staying calm, how to dictate, etc.)
· Speech recognition: technical aspects of the software prior to respeaking (e.g., terminology management, voice training, etc.)
Figure 22 Importance of specific prerequisites for successful intralingual or interlingual live subtitling with respeaking (mean).
For intralingual live, the first test (Friedman’s ANOVA) was significant, with χ2(3) = 63.95, p < .05. However, conducting all pairwise tests would have been too complex because of the number of comparisons. We therefore concentrated on the first item in the ranking, that is, multitasking, and compared it to the second and to the third item. The Wilcoxon Signed Ranks test was not significant for the comparison between the first and second items (Z = –3.077, p = .001, thus > .0009), but the comparison between the first and third items was significant (Z = –4.184, p = .000029, thus < .0009). In other words, one could tentatively conclude that the ability to multitask seems to be considered significantly more important than the other abilities listed, except for the perfect command of the source and target languages. For interlingual live, the first test was not significant. In other words, no skill or ability in the listed items seems to be considered more important than any other for interlingual live subtitling.
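Assuming that the .0009 threshold follows the same Bonferroni logic as the .008 threshold above, it corresponds to .05 divided by the number of possible pairwise comparisons among the eleven listed items: .05 / (11 × 10 / 2) = .05 / 55 ≈ .0009.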
This article is part of the Erasmus+ project ILSA, which aims to design, develop, test and validate the first training course for interlingual live subtitling. The purpose of this project is to make live content and live events fully accessible. Today, live content is largely being made accessible intralingually, but hearing-impaired persons often find that this is not the case for foreign-language live content. This means that the demand for interlingual live subtitles is increasing, while an all-encompassing training method, and the research necessary to develop such a method, is still lacking. ILSA hopes to bridge this gap. This article covers the first step in this study, as it assesses the current landscape in intralingual and interlingual live subtitling training and practice. By disseminating questionnaires among practitioners, we have tried to answer the following questions: Who are live subtitlers and how have they been trained?
As far as the demographics are concerned, we can conclude that, today, live subtitlers tend to be highly educated young women who generally combine at least two functions, mainly those of intralingual live subtitler and intralingual subtitler. Regarding professional practice, we can say that a majority of the intralingual live subtitlers and ILSers work for television and that working exclusively for live events is rare. In addition, both professional practices are part-time activities: only 17% of intralingual live subtitlers work more than half-time in that function, and even fewer ILSers (10%) do so.
As stated before, the main focus of the survey was on the training that live subtitlers have received. The question was not only where and when they were trained, but also how: they were asked about the duration, the content and the structure of the course, the assessment and the use of a possible aptitude test. The majority of both groups of live subtitlers were trained in-house. However, there are considerable differences in training mode and format between the two groups. Only one-quarter of the intralingual live subtitlers did not take part in an in-house training programme, whereas in-house courses account for a little less than half of the respondents on interlingual live subtitling. For the ILSers, the share of HEI courses was thus much higher than for the intralingual live subtitlers. Nevertheless, we should not forget that the group of ILSers is rather small, which means that, in absolute terms, respeaking courses at HEI level are still relatively rare.
Comparing the results of the different groups of respondents is not a simple task, given the often limited number of (usable) responses. Nevertheless, a number of tentative conclusions can be drawn. Participants who want to enrol in an intralingual or interlingual HEI course are only rarely required to take an aptitude test. However, students do need to hold a degree, for example a bachelor’s degree, before they can start the master’s programme of which the course is part. In contrast, aptitude tests are more common for in-house or vocational courses: candidates are required to take tests, such as a respeaking or a language test, before they can start the course. With regard to the duration and the structure of the courses, many differences can be observed. Nonetheless, HEI courses tend to be longer and to have a clearer structure. Some professional courses also last several months, but others take only a few days. In addition, companies appear to organise more on-the-job training, following a hands-on approach. In fact, most of the courses – both professional and HEI, both intralingual and interlingual – focus on practice, although some training programmes (especially at HEIs) start with a brief theoretical introduction to live subtitling and the respeaking software before putting this knowledge into practice.
As for the training set-ups, there are no major differences between the different groups of respondents. Most courses focus primarily on individual respeaking with self-correction, although many also combine several training set-ups. These courses start, for example, with individual respeaking without self-correction before moving on to respeaking with self-correction and with parallel correction. In general, face-to-face modes of delivery are preferred over online methods in all types of training. Most courses combine face-to-face lectures with face-to-face seminars or workshops. Continuous evaluation is often used, sometimes in combination with a final exam. Final exams alone, on the other hand, are rare. The intralingual vocational courses are slightly different in this respect, as they are more diverse, using final exams, continuous assessment and internships to evaluate participants. Accuracy rates are measured to a lesser extent in courses on interlingual live subtitling. This should not come as a surprise, as a model for assessing the quality of interlingual live subtitling has only recently been developed (Romero-Fresco & Pöchhacker, 2017).
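To the best of our knowledge, the NTR model computes an accuracy rate in much the same way as the NER model used for intralingual live subtitling, with translation errors (T) and recognition errors (R) deducted from the number of words (N): accuracy rate = (N – T – R) / N × 100. The exact formulation and error weighting are given in Romero-Fresco and Pöchhacker (2017).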
Another important topic of the survey was the respondents’ perception of their training: Did they feel prepared after their training? Which competences turned out to be superfluous and which were not addressed? In general, the respondents are rather positive about their training. Moreover, most of them do not feel that the competences dealt with in their course were superfluous. As for the competences not included, some respondents would have liked more attention to be paid to the respeaking software and its capabilities, to working with more than one respeaker at the same time, and to dealing with stress and voice control. Overall, there are no major differences between the different training groups.
A final topic in the survey concerned the importance of a number of prerequisites for successful live subtitling with respeaking. The intralingual live subtitlers considered formal training and/or experience in subtitling to be significantly more important than in simultaneous interpreting, which in turn was considered significantly more important than experience in consecutive interpreting; there was no difference between consecutive interpreting and translation. The ILSers, in contrast, attached equally great importance to all the suggested disciplines. Regarding the more general skills and prerequisites, the ability to multitask and a perfect command of the source (and target) languages were considered the most important by both the intralingual live subtitlers and the ILSers.
When analysing the surveys, it is important to keep in mind the limited number of respondents in some of the training groups. Although this makes it more difficult to draw general conclusions, it also reveals the sheer shortage of training programmes on (interlingual) live subtitling today, which is the starting point of the ILSA project. These results also demonstrate that the training currently offered does not meet the increasing demand for interlingual live subtitling. Moreover, they make clear that the few courses organised by professionals vary widely. In other words, an all-encompassing programme to train future ILSers has yet to be developed.
This article focused exclusively on practitioners. In the ILSA project, similar questionnaires were also disseminated among trainers who teach a course on live subtitling at HEIs and representatives of broadcasters and service providers who employ live subtitlers. The information gathered from these surveys will be presented and discussed in a later publication.
Arumí Ribas, M., & Romero Fresco, P. (2008). A practical proposal for the training of respeakers. JoSTrans, 10, 106–127.
Chmiel, A., Lijewska, A., Szarkowska, A., & Dutka, Ł. (2018). Paraphrasing in respeaking: Comparing linguistic competence of interpreters, translators and bilinguals. Perspectives, 26(5), 725–744. doi:10.1080/0907676X.2017.1394331
Chmiel, A., Szarkowska, A., Koržinek, D., Lijewska, A., Dutka, Ł., Brocki, Ł., & Marasek, K. (2017). Ear-voice span and pauses in intra- and interlingual respeaking: An exploratory study into temporal aspects of the respeaking process. Applied Psycholinguistics, 38(5), 1201–1227. doi:10.1017/S0142716417000108
European Union. (2010). Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services: Audiovisual Media Services Directive. Official Journal of the European Union: Legislation, 53, 1–24. doi:10.3000/17252555.L_2010.095.eng
Field, A. (2009). Discovering statistics using SPSS (3rd ed.). London, England: SAGE.
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1–4. doi:10.1207/S15326985EP3801_1
Paas, F., Tuovinen, J. E., Tabbers, H., & Van Gerven, P. W. M. (2003). Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist, 38(1), 63–71. doi:10.1207/S15326985EP3801_8
Remael, A., & van der Veer, B. (2006). Real-time subtitling in Flanders: Needs and teaching. In C. Eugeni & G. Mack (Eds.), inTRAlinea Special Issue: Respeaking. Retrieved from http://www.intralinea.org/specials/article/Real-Time_Subtitling_in_Flanders_Needs_and_Teaching
Remael, A., Orero, P., & Carroll, M. (Eds.). (2012). Audiovisual translation and media accessibility at the crossroads: Media for all 3. Amsterdam, The Netherlands: Rodopi. doi:10.1163/9789401207812
Romero-Fresco, P. (2012). Respeaking in translator training curricula: Present and future prospects. The Interpreter and Translator Trainer, 6(1), 91–112. doi:10.1080/13556509.2012.10798831
Romero-Fresco, P. (2019). Respeaking: Subtitling through speech recognition. In L. Pérez-González (Ed.), The Routledge handbook of audiovisual translation (pp. 96–113). Abingdon, England: Routledge. doi:10.4324/9781315717166-7
Romero-Fresco, P., & Pöchhacker, F. (2017). Quality assessment in interlingual live subtitling: The NTR Model. Linguistica Antverpiensia, New Series: Themes in Translation Studies, 16, 149–167.
Saldanha, G., & O’Brien, S. (2013). Research methodologies in translation studies. Manchester, England: St. Jerome. doi:10.4324/9781315760100
Szarkowska, A., Dutka, Ł., Pilipczuk, O., & Krejtz, K. (2017). Respeaking crisis points: An exploratory study into critical moments in the respeaking process. In M. Deckert (Ed.), Audiovisual translation: Research and use (2nd ed., pp. 179–201). Bern, Switzerland: Peter Lang. doi:10.3726/b11097
Szarkowska, A., Krejtz, K., Dutka, Ł., & Pilipczuk, O. (2016). Cognitive load in intralingual and interlingual respeaking: A preliminary study. Poznań Studies in Contemporary Linguistics, 52(2), 209–233. doi:10.1515/psicl-2016-0008
Szarkowska, A., Krejtz, K., Dutka, Ł., & Pilipczuk, O. (2018). Are interpreters better respeakers? The Interpreter and Translator Trainer, 12(2), 207–226. doi:10.1080/1750399X.2018.1465679
United Nations. (2006). Convention on the Rights of Persons with Disabilities [Report]. Retrieved from https://www.un.org/disabilities/documents/convention/convoptprot-e.pdf
[1] http://www.psp-dtv4all.org/
[2] http://pagines.uab.cat/act/
[3] http://www.adlabproject.eu/
[4] http://pagines.uab.cat/hbb4all/
[5] The authors use three measurable dimensions of CL: mental load, mental effort and performance. Drawing on Paas, Renkl and Sweller (2003), they explain that mental load is the expected cognitive capacity that will be needed for the task. Mental effort is the cognitive capacity that has actually been allocated to the task and performance is “an aspect of the cognitive load that shows the person’s achievements in carrying out the task” (Paas, Tuovinen, Tabbers, & Van Gerven, 2003, p. 64).
[6] http://www.ilsaproject.eu/project/
[7] Both the absolute figures and the percentages are included, since in Figure 8 the percentages are based on many fewer responses. The same reasoning is applied throughout this article.
[8] Comments from the questionnaires that are quoted in this article retain the original wording, including linguistic errors.