Social justice and translator training and education in a time of (non-)equitable tech
Renée Desjardins
Université de Saint-Boniface
https://orcid.org/0000-0001-9132-6361
Valérie Florentin
York University Glendon campus
https://orcid.org/0000-0002-2976-9628
Abstract
Social justice refers to the equitable distribution of resources, opportunities, and rights in society. Social justice frameworks acknowledge that structural inequalities can hinder access to education and that the use of and recourse to technology is not always ethical or equitable. As translator trainers with more than two decades of experience in higher education, we reflect on the nexus between technology, translator training, ethics, and social justice, and put forward a list of strategies with which to humanize translator training or education and professional practice. We focus on Canada and draw on a literature review conducted using the Translation Studies Bibliography, which shows that research on the subject of translator training or education and social justice is currently underdeveloped. We also draw on two media scans (conducted from November 2022 to October 2023) on the topics of higher education, translator training, social justice, and technological disruption and related digital divides, with a specific focus on machine translation and artificial intelligence. Along with other demolinguistic data from the 2021 Canadian Census, the review and scans contextualize the list of pedagogical recommendations and strategies we propose. We adopt the position that social justice should take precedence in the way we think about translator training and develop curricula, rather than allowing market forces and the tech industry to determine training priorities and objectives. Artificial intelligence and other new(er) technologies can have pedagogical merit in higher education and in translator training, but it is imperative that we consider how and when to use these tools and that we focus on issues that go beyond plagiarism and student surveillance. We therefore argue in favour of human, humane, and humanising translation in and beyond Canada, which means advocating for and developing pedagogical strategies and curricula that align with the ethos of social justice.
Keywords: translator training, social justice, pedagogy, technology, digital divide
1. Introduction
In its broadest sense, technology is an intrinsic part of our lives and, as some would argue, it “is [also] a part of what makes us human” (Hare, 2022, p. 6). Increasingly, technology is also “frictionless”, meaning that it has permeated our lives in ways that we no longer overtly realize or think about – which is, arguably, not a bug but an intentional feature of most contemporary social technology, from social media apps to artificial intelligence (AI) chatbots. When technology is discussed in translator training or education, “technology” is usually taken to mean translation-specific or translation-adjacent technologies, that is, tools that are specifically designed to help with translation tasks (e.g., translation memory software or machine translation) or that translators use to work (e.g., hardware such as a laptop or a tablet; software such as word-processors). However, as the technological landscape continues to evolve, other technologies are now becoming an inherent part of the translator’s workflow. For instance, translators generally produce content that users will consume online and/or digitally, meaning that technology is used not only to translate content, but also to disseminate and publish content. It becomes part of the context. For instance, the translator who works as part of an in-house translation team producing captions for a brand’s Instagram page will not only use tools such as machine translation or project-management software; they will also have to use, be proficient in, and think about the social platform (here, Instagram) for which they are creating translated content. However, the literature on translator training and technology tends to focus on translation-specific technology and related issues rather than on aspects of the broader technological landscape. At the time of writing, most undergraduate translation programs in Canada also attest to this narrower understanding and approach.
Similarly, in Europe, the European Master’s in Translation Competence Framework (2022) lists “Technology” as one of its five main areas of competence. But here, too, technology overlaps specifically with language services tools (software and hardware) rather than with the larger technological contexts in which multilingual communication and language service provision take place. For instance, skills 15 to 20 of the “Technology” area of competence list tools such as “office software”, “corpus-based tools”, “multimedia files”, and “workflow management tools”. Another example is social media, which is mentioned only twice, under “professionalism” (skill 24) and as part of “networking skills” (skill 35), which ignores the fact that social media content can also be the result of myriad translation processes. In other words, social media platforms should not be taught peripherally, with networking as the sole purpose, but as key contexts in contemporary professional practice. Furthermore, “data literacy” is included in the Framework (skill 19), but no clear definition is provided. This seems to suggest that all datasets require the same level of literacy when, in fact, different data can require different literacies and approaches to management (cf. Buolamwini, 2023; D’Ignazio & Klein, 2020; McDonough Dolmaya, 2024; O’Neil, 2017). In Canada, professional associations such as OTTIAQ (Ordre des traducteurs, terminologues et interprètes agréés du Québec [Québec Order of certified translators, terminologists and interpreters]) also list requisite competencies, and these in turn inform the curricula of Canadian translator training programmes. Here, technological competencies relate to what “clients want” (“pour répondre aux besoins des clients” [to meet clients’ needs]) and to ongoing professional development (“mise à niveau des technologies” [keeping technologies up to date]) (OTTIAQ, 2015), but not to wider matters such as technological ethics.
Related to this, whereas ethical considerations are starting to gain traction in translator training, they are generally reduced to translation-specific matters (e.g., language data, translation quality) and do not adequately engage with broader matters such as access to higher education and to translation programmes (particularly for marginalized or lower-income populations), ineffective and inequitable language policies in academic institutions, technological divides, and so on.
In this article, we take the position that the broader technological landscape needs to be dealt with at all levels of translator training, not exclusively in technology-focused courses, to include the social and ethical challenges that technological developments pose.
We present observational data collected from the Translation Studies Bibliography (TSB) and from two recent media scans of mainstream Canadian media (national, provincial, and local coverage) and social media content. The literature review, media scans, and ensuing critical reflection allowed us to gain a better understanding of the debates and mainstream discourse on social justice and the uptick in the use of technology (particularly AI) on Canadian campuses. We then examine current Canadian translation curricula critically to determine some of the ways in which contemporary training and pedagogy align or misalign with social justice principles. Finally, we propose a non-exhaustive list of strategies that seek to humanize translator training and professional translation, and to integrate social justice perspectives. We are specifically interested in examining what makes translation uniquely human and humane compared to artificial or automated options and in emphasizing this in training.
2. Approach to data collection: Literature review and media scans
We queried the TSB to identify existing research on the topic of social justice and translator training or education from 2018 to early 2024. This, in addition to previous work on the subject of translator training and pedagogy during the pandemic (Desjardins & Florentin, 2022), confirmed that the intersection of social justice, translator training in Canada, and technological disruption remains a relatively underdeveloped area. It remains so despite the pressing need for more reflection, guidance, and evidence-based strategies related to the increasingly widespread integration and use of AI and other technologies on campuses. We used the TSB because it is one of the main databases of current Translation Studies (TS) scholarship, because it includes multilingual content (unlike other databases that may be unilingual or bilingual) and interdisciplinary research, and because it offers more extensive geographic representation.
Because discourse also shapes public and professional perceptions and opinions, we were also interested in examining the ways in which these topics were being discussed in mainstream arenas, including the press and social media. The media scans thus had a twofold purpose: (1) to obtain and save a record of mainstream reporting and discourse on the topic of AI and higher education in Canada and (2) to note the main discursive trends and sentiments among higher-education faculty, administrations, and students on these topics. These media scans situate our work – they are observational and contextual in nature.
We conducted two specific media scans (one on social justice, another on AI and digital divides), which included querying the main Canadian current affairs databases (Canadian Major Dailies; Eureka; ProQuest – these include English- and French-language content) and monitoring social media (in both official Canadian languages) using hashtag indexing and trending topics. The scans covered the period from the launch of the legacy version of ChatGPT (GPT-3.5) in November 2022 to the early stages of writing this article (October 2023). It is worth noting that we began our work on the platform formerly known as Twitter (now X) and, unfortunately, had to rework our methodology in the light of significant shifts in ownership and platform direction. Ultimately, the content we collected was still sufficient to obtain a general sense of some of the debates related to these topics. Some of the hashtags we monitored included #SocialJustice, #ungrading, #TranslationStudies, #DigitalDivides, #Translation, and #TranslatorTraining. As our research is ongoing, the definitive number of articles and social media items included in our bibliography and corpus continues to grow (currently, there are more than 50 articles and social media items). The articles included in our scans (in English or French) will be made available in open access (at a future date) on the website of the Laboratoire d’intégration du numérique en enseignement de la traduction (LINET, 2024). LINET is a Canadian research group that focuses on technology in translator training and education, with an emphasis on creating and disseminating French-language material. Although the research group is located in Canada, international Francophone colleagues have also contributed, either in an advisory capacity or by creating content for the site.
3. Insights from the literature review and media scans
3.1 Technological disruption in higher education in Canada (Scan 1)
My colleagues wondered how one could tell whether a student used the AI to answer questions, and many were concerned with how it might enable plagiarism. What if a professor suspected a student had used ChatGPT but couldn’t prove it? (Steipe, 2023, para. 2).
The issue with hastily conflating the use of AI with academic dishonesty is that it can lead to policies that impede pedagogies based on trust. Rather than trusting that students use this technology because they are curious and want to leverage it in ways that stimulate critical thinking or problem-solving, such policies assume that students want to cheat and deceive. Related to this, a recent study showed that the very existence of generative AI might mean that professors now have higher expectations when grading, as they suspect recourse to AI when student output was “either very well-written or referred to irrelevant content and literature” (Farazouli et al., 2023, p. 10). We argue that this stance (starting from a position of doubt or mistrust) runs counter to recent discussions in education about the importance of academic environments that promote trust and transparency.
However, it is impossible to ignore the fact that AI has been, is, and will continue to be used in deceptive and dishonest ways – not only among students, incidentally, but also by the larger academic and research community. On this latter point, a 2023 Nature survey shows that 25% of researcher respondents (n=1,600) used AI to help them write manuscripts and 15% used AI to assist them in grant writing (Parrilla, 2023; Van Noorden & Perkel, 2023). To suggest that AI is a “student problem” ignores the fact that faculty are also using this technology, which is similar to trends previously seen with Wikipedia, where some professors proscribed the use of the online encyclopedia among students, only to rely on it for their own work (this is documented on Wikipedia [Wikipedia, 2024] and also in anecdotal accounts on platforms like Reddit). The pace at which AI is disrupting various sectors – both academic and professional – is such that it is difficult to gauge the appropriate level(s) of technological integration and related strategies to mitigate different kinds of challenges. In some cases, universities are adopting an “if you can’t beat them, join them” approach, meaning that AI is being introduced without appropriate risk assessment and often without a sound student-centred rationale (cf. Gilson, 2023). In fact, policy surrounding the use of AI in Canadian universities appears generally to favour top-down approaches rather than including students in the decision-making process (ibid.). How, then, do we strike a balance – crafting policy that advocates trust, serves the best interests of students and faculty, keeps pace with rapidly evolving technology, and is not hypocritical?
Some argue that the answer lies in the way assignments are designed and assessed. Elsayed (2023) suggests including assignment components that favour critical thinking and analysis as a way to circumvent the issue of students using AI without discernment. While this approach has merit, it cannot be systematically implemented, particularly in courses – such as core or foundational courses – where learning objectives and competencies include information retention and retrieval. Moreover, given that AI is likely to evolve rapidly and unpredictably from one semester to the next, let alone from one academic year to the next, both faculty and students will have to contend with ever-changing forms of assessment dictated by the tech sector. This seems rather antithetical to the tenets of academic and intellectual freedom. It also continues to place education and training not only at the mercy of economic markets – which is already an issue and a growing concern in Canada – but also at the whims of an industry known to “move fast and break things” (Zuckerberg, cited in Blodget, 2009), an industry for which the longer-term effects of AI, more specifically, have yet to be fully ascertained.
Elsewhere, studies have suggested that the long-term use of AI can curtail our human abilities and capacities for ideation, problem-solving, and critical thinking. Dahmani and Bohbot (2020) cite GPS as an example: over time, over-reliance on GPS technology weakens our internal (human) navigation abilities. Similarly, studies in neuro-ethics have suggested that devices which “extend the mind” may in fact invade the privacy of our thoughts, akin to a problematic mind-reader (Reiner & Nagel, 2017, p. 115).
Other reporting and research have noted that “free” technologies are seldom actually free. Many people now know their data is the trade-off for “free” access to software and apps. But other costs, such as safety and privacy, digital and online colonialism (Couldry & Mejias, 2019; Desjardins, 2022a), AI colonialism (Adams, 2021; Bommasani et al., 2021; Gentelet, 2023; Zuboff, 2019), the working conditions of the ghost labour that supports AI (Castaldo, 2023; Gray & Suri, 2019; Luccioni, 2023; Perrigo, 2023), the environmental costs of training the large language models that underpin AI (Luccioni, 2023; Luccioni et al., 2022), and the carbon footprint of the Internet more broadly (Batmunkh, 2022) may not be as obvious. Another important consideration is the fact that AI is largely premised on and perpetuates “artificial Western ethno-intelligence” (Williams & Shipley, 2020), a topic also raised by many critical and feminist scholars. D’Ignazio and Klein (2020), for instance, point out that computer programming and computer science were not particularly inclusive or diverse disciplines, which meant that the technologies that derived from these disciplines largely replicate(d) pre-existing hierarchies and stereotypes. Noble (2018, pp. 12–13) has shown that “algorithmic-driven decision-making” often obfuscates human and social contexts, which can and does have deleterious effects on and consequences for (notably, though not exclusively) marginalized groups (see also Appel & Weber, 2021). Noble (2018, p. 13) argues further that it is the responsibility of faculty to engage in interdisciplinary work – alongside activists, policy-makers, engineers, designers, etc. – to “blunt artificial intelligence decision-making” and re-centre “nuanced human decision-making”.
In some cases, Canadian universities are integrating these technologies with little consideration for these wider ethical implications, focusing instead on issues related to academic integrity, such as policing and sanctioning students who use AI. On Canadian campuses, the use of and recourse to AI must be contextualized in the light of the contemporary migratory and demolinguistic changes taking place in the country. Some students are turning to AI not necessarily because they lack the disciplinary skills to complete assignments, but because AI serves as a writing aid. Although the Government of Canada is revising and amending its federal language policy, the official languages remain English and French, and these are also the languages of most higher-education institutions in the country. For students whose language profiles do not include English or French as a first language or as the language most spoken at home, AI can be a helpful editor. However, this also puts this student demographic at greater risk: unlike their Anglophone or Francophone peers, students who do not have English or French as their most-spoken language may rely more extensively on AI for language assistance, in turn increasing the potential for accusations of plagiarism or other forms of academic dishonesty.
Like Noble (2018), we take the position that top-down administrative approaches are antithetical to the call for a greater diversity of voices when technology is being deployed at scale and when the power dynamics are unequal across the groups engaging with the technology. Faculty wield power over the way students use AI in the classroom as well as in their work, and there can be a high degree of variability from one classroom to the next. For instance, one professor might integrate AI and impart adequate to comprehensive AI literacies, while another may choose to ban the use of AI outright. Students, then, are on the receiving end of mixed messaging, which can lead to missteps that carry consequential sanctions. Furthermore, when students are obligated to use AI without critical discussion and compromise, we ignore the fact that some are deliberately choosing to opt out for valid ideological reasons. For instance, some students have shared the view that trading their data for access to free tools such as social media platforms and apps is something they do not want to do (for an early discussion of this in TS, see Desjardins, 2011).
This scan shows that concerns about the use and implementation of AI in Canadian higher education are shared by faculty, students, and administrators alike. AI policy and implementation are more often informed by market economics (i.e., student employment and labour market needs) and, arguably, by academic surveillance (i.e., academic integrity). The scan also shows the ways in which AI and related technologies intersect with larger social justice issues, though the link is rarely made explicitly. Therefore, we argue that explicitly using a social justice lens to examine how, when, and why to implement AI is a necessary approach, for the following reasons:
(1) Social justice seeks to dismantle power relations that unduly marginalize, disenfranchise, divide, and disembody, which are the very power relations that technology can often exacerbate.
(2) Academic institutions have the responsibility to decelerate mainstream and economic discourses that seek to push technological disruption without reflection; social justice approaches promote deceleration and usually intersect with the ethos of “slow” movements. Here, we are especially interested in the ways in which slow movements manifest in academic contexts (Berg & Seeber, 2016) and believe that this overlaps with social justice initiatives in higher education.
(3) Social justice frameworks more readily align with progressive and innovative pedagogical approaches that already critically examine how, when, and why technology should be and is deployed in education.
(4) A social justice lens encourages more nuanced debates that go beyond the binary of technology being good or bad. Social justice encourages “both … and” approaches where human and social factors are considered (cf. Noble, 2018).
3.2 Social justice and higher education in Canada (Scan 2)
Although social justice is not an inherently new concept, a number of key events that took place from the late 2010s onwards made it part of the zeitgeist. Social movements active both offline and online, such as #IdleNoMore (an Indigenous movement), #MeToo (a feminist movement), and #BlackLivesMatter (a movement responding notably, though not exclusively, to racial inequity), prompted various organizations and institutions to consider equity, diversity, and inclusion (EDI) in their organizational structures, practices, strategies, and policies. Academic institutions were among those that sought to implement EDI policies to remedy some of the issues these various social movements were highlighting. Bergquist and Pawlak (2008) note that institutional cultures in academia have historically been informed by social justice and EDI tenets. Therefore, although social justice and EDI have become more frequently used terms in recent years, their principles are not inherently new in academic contexts. Still, two key observations emerge from this scan: (1) although EDI is discussed and implemented to various degrees in academic contexts, there seems to be an underlying assumption that higher education is in and of itself accessible; (2) few news items make the connection between higher education in Canada and the increased implementation of and recourse to technology and AI as these relate to social justice issues.
First key observation: education is often considered the “great equalizer” (Duncan, 2023). A university degree may not necessarily be a guarantee of upward mobility and better professional outcomes, but research generally shows that a high school diploma (or equivalent) usually affects professional and personal outcomes positively. Similarly, as Goedhart et al. (2022) note, access to technology usually presents more socio-economic opportunities compared to a lack of access. In both cases, a lack of access to education and/or technology deepens the divide(s) between the “haves” and “have-nots”, particularly at a time when governments have increasingly come to expect self-reliance from their constituents. Given these facts, rethinking curricula across the continuum (from elementary to secondary or high school) to ensure that education is accessible to all economic and social demographic groups, including lower-income, marginalized, and migratory groups (Fiel, 2020; Zhou, 2019), is paramount. For this reason, we are more inclined to use the acronym EDIA, in which the “A” refers to accessibility. In many education contexts, equity, diversity, and inclusion are moot if accessibility is not an equal priority. Accessibility should not be limited to the existence of institutions and programmes; it should also include policy, financial, and social supports. Our argument is not that post-secondary education is not valuable, but we feel it is necessary to think about how “degree inflation” affects trends in education and various populations. Precisely because post-secondary education can be prohibitive (be it in financial or temporal cost, or both), it is not inherently accessible. Moreover, discourse from employers and the marketplace has also contributed to “degree inflation”: jobs that did not previously require undergraduate or graduate degrees now do, often with no substantiated rationale other than prestige or a desire to triage candidates. This makes it especially difficult for some demographic groups to access employment because their education no longer suffices; in other words, professional upward mobility (or even sustained employment) is constrained unless there is further investment in education (Fuller & Raman, 2017). Moreover, if the price of upskilling is debt, there is little incentive for many to pursue post-secondary education. It is worth noting that some universities have put in place programmes to select top-tier talent from marginalized or lower-income groups, which creates the illusion of accessibility as well as greater EDIA (Zhou, 2019); however, selection based on high performance is not the same as education that is accessible by default. Canada has a high number of post-secondary degree holders: according to Statistics Canada (2022), 57.5% of 25–64-year-olds hold post-secondary degrees. Yet this percentage can be misleading, as it does not indicate the barriers that may have prevented the remaining 42.5% from obtaining similar credentials, nor does it indicate which populations are underrepresented. To summarize: while Canada is a highly educated nation, education is not necessarily accessible to the extent that it should be, nor is it easily accessible to the specific demographics who may need training and credentialling the most.
Second key observation: the scan also shows that access to technology is taken for granted, despite the fact that digital divides have been investigated previously in education research (Selwyn, 2011). Certainly, cost and financial means are factors: if students cannot afford technology (e.g., software, hardware, Internet access), this also has an effect on their educational outcomes. However, digital divides are not always the result of financial constraints: in fact, the factors that create or exacerbate digital divides are multi-layered. Geography, limited digital literacies, disabilities, age, gender, ideological stances, religion, race, etc., can all intersect with the uptake of and access to technology. Some academic administrators assume that because campus-wide Internet access is available, problems related to Internet access are circumvented; but this is not the case. For instance, during the height of the Covid-19 pandemic, students who could not access campus and who were geographically remote often faced greater challenges related to Internet accessibility, bandwidth, and reliability. Reimers (2022, p. 463) explains that students from developing countries notably experienced the most significant levels of social and educational inequality and the reduced effectiveness of some alternative modalities in education. Moreover, student groups are not homogeneous: some students may be more technologically savvy due to personal interest and/or easier access to technology, whereas others may not share a proclivity for technology and may not benefit from household access (e.g., big data plans or higher-speed Internet) or devices (e.g., tablets for each member of the household; personal laptops; the most recent mobile devices). Technology, like education, can be an equalizing force, but it is important to recognize the ways in which it also creates, contributes to, and exacerbates divides (Gentelet, 2023; Morrell & Rowsell, 2020).
Access to technology is also intrinsically linked to language: it is general knowledge that English is the lingua franca not only of the Web, but of the tech world more broadly. D’Ignazio and Klein (2020) allude to the inherent Anglocentrism of many digital, computational, and online contexts. When technology is deployed at scale on most Canadian campuses, this fact is often abstracted: it is assumed that students can and will interact with English-language technology without experiencing any challenges. However, Canadian campuses located in minority-language contexts are examples of where challenges can arise: some software is simply not available in French or in Indigenous languages, which means that students are required to default to English. For some, this may pose little or no problem, but for others it is a linguistic barrier and therefore an obstacle to accessing education. This linguistic asymmetry is consequential because it affects the choice of tools that can be used and taught across the country. An argument could be made that Franco-minority or Indigenous-minority campuses can reasonably default to English in some situations to ensure that contemporary technologies are included in various programme curricula; but this runs counter to the very mandate of minority-language institutions. Although machine translation and other tools may offer a mitigating solution, this nonetheless places the burden on non-Anglophone or English-as-additional-language students and introduces a number of machine-translation-related problems into the equation – including the need to be machine translation “literate” (Bowker & Buitrago Ciro, 2019; Machine Translation Literacy Project, 2023). A social justice perspective acknowledges that language diversity is a strength in higher education and that policy on technology should be developed with language diversity, equity, and accessibility in mind from the outset.
Finally, in Canada, the promotion of technology in higher education occasionally aligns with political party platforms that advocate performance-based and market-driven programmes (Froese, 2022). Here, the use of and recourse to technology in university programmes has little to do with imparting critical literacies related to the online world; instead, the motivation is to ensure that students are competitive and ready to serve a (neo-liberal or capitalist) marketplace, which is antithetical to the idea that higher education should be – fundamentally – intellectually emancipatory (for more on the way education has shifted since the publication of the 1972 UN Faure report, see Biesta, 2012, 2022).
Although we underscore problems related to the accessibility of education and technology in higher-education contexts, we acknowledge that offline or in-person educational spaces still remain essential because they are not algorithmically determined. This means that the offline classroom remains one of the few spaces where shared humanity does not necessarily have to be mediated by digital technologies and online connectivity.
4. Technology, social justice and translator training in Canada
For translator training and education, technological disruption has consistently been a factor to consider, although recent developments have occurred at an unprecedented pace, notably since the advent of neural machine translation (NMT) in 2016. Like their colleagues in other disciplines, translation faculty in Canada also share concerns about the use of AI in training and student work (Bowker, 2023; Desjardins & Florentin, 2023; Poirier & Boucher, 2023). Some researchers and educators argue that machine translation literacy should be taught across academic disciplines, a point that, though not explicitly made, would fit within the European Digital Competence Framework for Citizens (cf. Vuorikari et al., 2022) or the Media and Information Literacy and Digital Competencies supported by UNESCO (cf. Grizzle et al., 2021). Some faculty argue that trainees should be able to translate without machine intervention, whatever form that may take (NMT, computer-assisted translation [CAT] tools or AI tools), underscoring the importance of linguistic competencies. Others argue that technology has changed the professional landscape and that although linguistic competencies are still important, technological skills and literacies should take precedence in training. However, focusing on these macro positions minimizes the equally valid meso- and micro-level questions to which we now turn.
Technology is indisputably part of the translator’s skillset, but we argue that it should be taught through the lens of social justice. For instance, the data that underpin AI are more voluminous in central languages, such as French, and hypercentral languages, such as English; and such data are usually of “better” quality in these dominant languages, despite the fact that they “erase language variation and mostly conform to the linguistic majority in their training data” (Bommasani et al., 2021, p. 25) and the fact that
systems are unintentionally exacerbating this imbalance due to their reliance on vast quantities of data derived mostly from English-language sources. Other languages lag far behind English in terms of digital presence and even the latter would benefit from greater support (Rehm & Way, 2023, p. 26).
This ties into the fact that some languages are more technologically supported because they are more “economically viable”. For example, even in the European Union, where countries supposedly benefit from the same rights, languages are still not equally represented, nor do they benefit from the same status: English, German, French, and Spanish are more represented than other official or non-official languages used in the Union. Languages such as Catalan or Welsh, for example, are rarely represented and supported in the same way (Giagkou, 2022). Related to this, Canada’s current demolinguistic profile (Statistics Canada, 2023a, 2023b) shows that where translation is most needed is not, in fact, between the country’s two official languages. Stated differently, there is a responsibility to develop language expertise in Canada to ensure the equitable provision of translation and interpretation where AI fails to do so adequately, ethically, and equitably (for a discussion of the way machine translation and Canadian federal bilingualism were insufficient in the context of Covid-19 and public health communication, see Desjardins, 2022b).
There is also a responsibility among translator trainers and educators to consider how some cultures may reject machine intervention because translation is an inherent part of language revitalization and, for this reason, should remain a human activity. This may be the case in the context of Indigenous language revitalization, for instance, where Indigenous language speakers should retain their agency and right to self-determination. In other words, “outsiders” and machines should not necessarily teach Indigenous languages or extract from these language communities without permission or reciprocity (Bird & Yibarbuk, 2024; Nicholas & Bhatia, 2023). Using AI and NMT tools also further widens the gap between economically viable languages and more peripheral languages: translators and other language professionals should accordingly act as language custodians. McEntyre (2009, p. 1) argues that “caring for language is a moral issue”:
Like any other life-sustaining resource, language can be depleted, polluted, contaminated, eroded, and filled with artificial stimulants. Like any other resource, it needs the protection of those who recognize its value and commit themselves to good stewardship (McEntyre, 2009, p. 1).
Given that translators and language professionals know that technology can contribute to language erasure, there is a social justice impetus to combat this trend.
Canadian geography and infrastructure also determine the technology that translation students have access to. For schools of translation with more remote student populations and online programmes, geography and infrastructure very much dictate whether some people have access to high-speed Internet. Students located in northern Manitoba or northern Québec, for instance, may experience more challenges with accessing high-speed Internet than students located in metropolitan areas. Data packages for mobile devices also vary widely across the Canadian provinces: rates are usually cheaper in the Prairie provinces than in bigger markets like Ontario, and bigger providers (who generally offer a more reliable service) are more expensive than smaller, local providers (who usually offer a more limited range of options).
Geography and infrastructure also create other technology-adjacent challenges: for students located in urban centres, the procurement of some software and hardware is much easier, given their proximity to various brick-and-mortar retailers and online retailer distribution centres (e.g., Amazon). Although it is true that online retailers and delivery services have bridged some divides, they have also created and exacerbated others. For instance, delivery and subscription costs may be higher the further a student is from a distribution centre – costs that can offset potential “deals”. There are other ethical considerations, too, but here we limit our argument to the fact that while online retailers have made purchasing academic materials much easier and, to some extent, more affordable, this is still one of the trends that widens the gap between those who have access to such services and those who do not. Of course, these examples are not specific to translation programmes, but they have been raised in the context of recent disciplinary discussions, notably at the “TTR 35th anniversary conference” (TTR, 2024) in June 2024. Translator training that is equitable, diverse, inclusive, and accessible is training that considers the accessibility barriers related to geography, infrastructure, and, relatedly, service provision or access and consumer trends.
We also note that the curriculum varies from one Canadian school of translation to another. In our review of existing course descriptions and programmes among institutions recognized by the Canadian Translators, Terminologists, and Interpreters Council (CTTIC, 2023), we found that programmes vary in their translation technology course offerings. First, whereas most programmes have mandatory and non-mandatory technology courses, the number and the description of these vary. For example, the Université de Montréal (Québec) has four technology-related courses in its programme[*] whereas the Université de Saint-Boniface (Manitoba) and the Université de Sherbrooke (Québec) each have one.[†] We acknowledge that technology can be integrated into any translation course or programme and that academic freedom means that professors, trainers, and instructors have a degree of latitude in the ways they can (or do) integrate technology into their course syllabus. However, we argue that more consistent training or additional resource sharing across Canadian institutions would be a way of ensuring more equitable training nationally, especially given that smaller institutions may not have the same human, financial, and infrastructure capital as larger institutions. Although it makes sense that there would be regional and institutional specificity across different schools and programmes, graduates of Canadian translation programmes should have similar baseline profiles and competencies, including technological skills; otherwise, significant asymmetries will exist in graduate pools. This is undoubtedly a challenge, but it reinforces the necessity of breaking down institutional silos and implementing more collaborative national approaches.
Finally, shifting student demographics are a reality. Translation pedagogy informed by social justice should ideally take into consideration the fact that student and trainee groups are increasingly heterogeneous and that students can contribute meaningfully to both how and what we teach in translation programmes. We have already discussed the inclusion of other languages in training beyond the official languages, but it bears repeating: given contemporary migratory flows, more and more community translation and interpretation is needed in Canada, and translation schools can play a significant role in responding to this need. However, with a narrow focus on official languages and the promotion of machine translation as the default solution to some of the translation challenges migration poses, we are doing a disservice to these populations. Social justice also prompts reflection on how best to adapt training for student cohorts who are looking to upskill or reskill and who are typically older (“second career”): undergraduates no longer comprise 18–25-year-olds exclusively. Andragogy, that is, the method and practice of teaching adult learners, should complement general pedagogical approaches, which also intersects with more inclusive and accessible translation training. Here, technology can actually play a positive role for trainers and educators: AI has been shown to assist educators by re-explaining concepts without the perception of impatience (Nerantzi et al., 2023; Rose, 2023; Sankaranarayanan, 2020; Toronto Metropolitan University, 2023; Trumbore, 2023). These tools can provide additional exercises or supplementary training material without imposing undue labour on educators who must already contend with numerous challenges related to post-pandemic teaching (e.g., adapting to hybrid instruction; increasing mental health problems among faculty and student groups; contending with Covid-19 when public health and institutional bodies are providing less guidance). AI can also be used in novel ways to engage cohorts of students who are (perhaps younger, though not necessarily) technologically savvy but who have specific learning needs, for example by helping faculty or trainers to design or support coursework inspired by the principles of universal design for learning (UDL) (CAST, 2023).
5. Humanizing and humane strategies that promote social justice in translator training in a time of technological disruption
Here we present four of the strategies we have used, and would suggest deploying, in translator training to redress some of the gaps identified in the literature review and the problems revealed by the media scans. This list of strategies is not meant to be exhaustive or prescriptive; instead, we hope it will encourage and inspire teaching colleagues to think about a pedagogy that is more equitable. As contributions in Blum (2020) note, more equitable pedagogical strategies tend to foster better learning outcomes. Some of our strategies may respond to the Canadian context more specifically, but we nonetheless contend that some of these challenges are shared, regardless of jurisdiction or institution, and can therefore be adapted as needed.
Strategy 1: Given Canada’s evolving demolinguistic profile and increased migratory flows on a world scale, schools of translation should have some language-agnostic training “baked into” curriculum design.
Although language-agnostic programmes may be the norm in some jurisdictions, in Canada, because of the Official Languages Act, this has not been the case historically. Most undergraduate programmes (which are the pipeline degree for most professional positions) were created within the framework of federal bilingualism. Change is happening at the graduate level, with Glendon’s Master in Translation Studies being an example of a language-agnostic programme, but we believe the language-agnostic model should also be implemented at the undergraduate level. As previously mentioned, some of the communities that were most affected by a lack of translation and interpreting services in recent years, and particularly during the peak years of the Covid-19 pandemic (2020 to early 2023), were those that regularly spoke languages other than French or English. We need to build translation and interpreting capacity and expertise in Canada, but continuing to do so in the two official languages alone does a disservice to the communities most in need of them.
Moreover, with an eye to the reconciliatory efforts since the official publication of the Truth and Reconciliation Commission’s Final Report (2015), translation training programmes that include training or support for Indigenous language translation, interpreting, and revitalization are a necessary step in the direction of linguistic justice and language inclusion and representation. Language-agnostic training would create an opportunity to include guest lecturers from these different communities and contexts and would provide some degree of malleability, meaning that without predetermining the languages of instruction, course materials could be adapted to reflect student cohorts and profiles in real time. We acknowledge this means that faculty will have to be adaptable but, here, technology can help in that it makes including external expertise more feasible and cost-efficient (e.g., via Zoom or Teams). Language-agnostic training also makes translation programmes more marketable and attractive to student groups who wish to undergo training in translation and interpreting in Canada but who may not necessarily have English or French in their language combination. This is one instance where market-driven curriculum design overlaps with EDIA-informed objectives. Developing translation capacity and expertise in cohorts that speak languages other than French or English can also contribute meaningfully to terminological research: for instance, technology-related terminology is limited in some languages, or such terminology has not been codified in vetted term banks such as Termium. Translator training in non-official languages could therefore contribute to developing better terminological capacity.
Strategy 2: Online and digital literacies beyond machine translation literacy should be integrated into translator training.
Recently, momentum has been growing to include not only translation technologies and post-editing competencies, but also machine translation literacy in Canadian translation programmes. Although programmes vary from one to the next, these competencies are part of the core curriculum. The Canadian-based Machine Translation Literacy Project also provides open-access content to both language specialists and the general public, supplementing programme-specific materials. However, translators are increasingly being called upon to work in online and digitized contexts where machine translation literacy and knowledge of language industry tools may be insufficient. While specialized courses such as localization, web translation, and audiovisual translation can cover wider-ranging online and digital competencies, these courses are not taught systematically. Such course offerings can also depend on finding an appropriate disciplinary expert to teach the material (unfortunately, some professors may not have the same currency as practising professionals, and there are ethical issues related to hiring contingent faculty who become dependent on precarious contract renewals). Still, online and digital spaces are the loci of multilingual communication, and we believe that an introductory course on digital and online media for translators could be a step in the right direction. This material could be modelled on the Machine Translation Literacy Project: instead of a programme-specific or university-specific course, faculty and practising professionals could come together and create open-access modules or content accessible to all Canadian schools of translation (or perhaps offered through a CTTIC portal). Such content could tackle topics such as social media, websites, applications, web-based and creator economies, and cybersecurity, with a specific focus on the ways in which multilingual communication factors into each of these contexts.
Strategy 3: Implementing “ungrading” and universal design for learning (UDL) principles into translator training in Canada.
As we have discussed, a number of factors affect the accessibility of education, both at a macro-level (higher education more broadly) and at a micro-level (discipline-specific accessibility). There was a time when entrance exams that assessed linguistic competencies in both official languages were standard practice (e.g., at the University of Ottawa and at the Université de Saint-Boniface). Recently, some schools have engaged in discussions to reconsider the role and validity of these exams, precisely in response to technological shifts. Removing entrance exams is one way of reducing the number of barriers that may deter students from specific cohorts from considering translation as a viable academic and career path. We also argue that translator training and course design could benefit from the integration of “ungrading” (Blum, 2020; Stommel, 2023) strategies and UDL principles (CAST, 2023). In Ungrading (Blum, 2020), researchers in various disciplines make the case, among other calls, for reconsidering alphanumerical grading and punitive assessment (e.g., Gibbs, 2020). Alphanumerical scales are arbitrary and have little demonstrable value in the way of actual learning outcomes (cf. Pulfrey et al., 2011). In fact, intelligence tests (e.g., IQ tests) and other similar forms of standardized testing (e.g., the entrance exams for translation programmes) are “racist, classist, sexist, ableist, and more” (Gardner, 1983/2011). The idea is that when we grade students, we inevitably rank them, which detracts from actual and effective learning. Alphanumerical assessments also intersect with some of the mental health issues that are increasing among student cohorts. Strategies such as iterative work and ongoing feedback have proven to be effective (cf. Blum, 2020).
Strategy 4: Teaching students how and why to leave “handprints”: encouraging human, humane, and humanising translation.
In his book Futureproof, tech reporter Kevin Roose (2021) provides “nine rules for Humans in the age of Automation”. Rule 4 is of particular note: “Leave handprints”. Roose deals with “workism” and “hustle culture”, that is, “the belief, common especially among type-A millennial overachievers, that work is not just an economic necessity but the primary source of identity and meaning in our lives” (Roose, 2021, p. 120). He makes a connection between this lifestyle or mindset – one that incidentally favours “young, childless, able-bodied men” (Roose, 2021, p. 120) – and less equitable and humane workplaces. Unfortunately, as automation gains ground and AI continues to be refined, it is unlikely that we will be able to outwork and outperform machines. Still, we have agency in the extent to which we choose to leave our mark in the professional decisions we make and in the translations we produce.
Like Roose, we believe it is important to teach students the value of “leaving a distinctively human mark” (Roose, 2021, p. 121) in professional translations. In some cases, this may mean saying no to using automated solutions; this may mean raising awareness among users of translations about the actual cost (i.e., environmental, financial, personal, cultural costs) of using machine translation and AI; this may mean considering how to push back against constant acceleration and automation in the workplace. Roose states that “leaving handprints is about how humanely we work” (Roose, 2021, p. 125) and we believe that such an ethos should make its way into translator training, particularly because translation is a professional and creative praxis that intersects directly with EDIA principles.
As with the previous strategies, we do not have a prescriptive set of activities to suggest, but we believe that if we encourage humane and humanising ways of translating and thinking about translation, then we can continue to make a case for the added value of human translation in the face of increased automation. One activity we might propose to student groups is to invite them to reflect on the ways in which they think translation can be done in more humane and humanising ways and to share their observations as a group.
6. Conclusion
For this study, we conducted a literature review and two media scans to determine the ways in which higher education, technological disruption (and more specifically AI), social justice, and translator training have been discussed in mainstream and academic discourse and also in TS, particularly since the arrival and mainstream use of ChatGPT 3.5 in November 2022. We noted that while training and technology are recurrent topics in TS, social justice and translator training were largely abstracted from the discussion, as evidenced by the low number of article and chapter “hits” in the TSB (to be clear: research on social justice and translation exists, but this body of work usually connects to policy and practice, less so to training and education). This is not to say that some ethical considerations have not previously been flagged or raised, but these usually intersect more directly with translation-specific technologies (e.g., machine translation, CAT tools, corpora) rather than with other macro-, meso-, and micro-level issues related to broader technological shifts. We note that access to higher education and technology is not a given in Canada and that this should be considered when we think of promoting more equity, diversity, and inclusion in translator training in this country and elsewhere. Some may opine that a call for greater equity, diversity, and inclusion is an “activist” position and one that may not be wholly objective or scientific. However, previous paradigms that have informed higher education and translator training programmes are not without their own biases and issues – these paradigms were premised on a priori assumptions that were classist, racist, sexist, ableist, and otherwise discriminatory. Therefore, we do not consider our position activist, but simply … human. Succinctly, we argue in favour of translator training that is more humane and humanising, that is aligned with Canada’s evolving demolinguistic profile, and that truly serves the people who need professional language services and linguistic justice the most. Incidentally, this is a call that neither AI nor technology can wholly answer without human intervention.
References
Adams, R. (2021). Can artificial intelligence be decolonized? Interdisciplinary Science Reviews, 46(1–2), 176–197. https://doi.org/10.1080/03080188.2020.1840225
Appel, M., & Weber, S. (2021). Do mass mediated stereotypes harm members of negatively stereotyped groups?: A meta-analytical review on media-generated stereotype threat and stereotype lift. Communication Research, 48(2), 151–179. https://doi.org/10.1177/0093650217715543
Batmunkh, A. (2022). Carbon footprint of the most popular social media platforms. Sustainability, 14(4), 2195. https://doi.org/10.3390/su14042195
Berg, M., & Seeber, B. K. (2016). The slow professor: Challenging the culture of speed in the academy. University of Toronto Press.
Bergquist, W. H., & Pawlak, K. (2008). Engaging the six cultures of the academy (revised and expanded edition). Jossey-Bass.
Biesta, G. (2012). Have lifelong learning and emancipation still something to say to each other? Studies in the Education of Adults, 44(1), 5–20. https://doi.org/10.1080/02660830.2012.11661620
Biesta, G. (2022). Reclaiming a future that has not yet been: The Faure report, UNESCO’s humanism and the need for the emancipation of education. International Review of Education, 68, 655–672. https://doi.org/10.1007/s11159-021-09921-x
Bird, S., & Yibarbuk, D. (2024). Centering the speech community. Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, 1, 826–839.
Blodget, H. (2009, October 1). Mark Zuckerberg on innovation. Business Insider. https://www.businessinsider.com/mark-zuckerberg-innovation-2009-10?r=US&IR=T
Blum, S. D. (2020). UNgrading: Why rating students undermines learning (and what to do instead) (with a foreword by A. Kohn). West Virginia University Press.
Bommasani, R., Hudson, D. A., Altman, R., Arora, S., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N., Chen, A., Creel, K., Demszky, D., Donahue, C., Doumbouya, M., Durmus, E., … Liang, P. (2021). On the opportunities and risks of foundation models. https://doi.org/10.48550/arxiv.2108.07258
Bowker, L. (2023, October 6–7). Educating next gen translators – and others! – in the age of AI. FIT Next Generation Conference.
Bowker, L., & Buitrago Ciro, J. (2019). Machine translation and global research: Towards improved machine translation literacy in the scholarly community. Emerald. https://doi.org/10.1108/9781787567214
Buolamwini, J. (2023). Unmasking AI: My mission to protect what is human in a world of machines. Penguin Random House.
CAST. (2023). About universal design for learning. https://www.cast.org/impact/universal-design-for-learning-udl
Castaldo, J. (2023, September 16). Meet the gig workers making AI smarter. Globe and Mail. https://www.theglobeandmail.com/business/article-ai-data-gig-workers/
Couldry, N., & Mejias, U. A. (2019). The cost of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press. https://doi.org/10.1515/9781503609754
Cowan, L. (2023). AI and the future of academic work/L’intelligence artificielle et l’avenir du travail académique. CAUT Bulletin, 70(5), 12–17.
CTTIC. (2023). Canadian Translators, Terminologists and Interpreters Council. https://www.cttic.org/history/
D’Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press. https://doi.org/10.7551/mitpress/11805.001.0001
Dahmani, L., & Bohbot, V. D. (2020). Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Scientific Reports, 10, 6310. https://doi.org/10.1038/s41598-020-62877-0
Desjardins, R. (2011). Facebook me!: Initial insights in favour of using social media as a tool for translator training. Linguistica Antverpiensia, New Series – Themes in Translation Studies, 10, 175–193. https://doi.org/10.52034/lanstts.v10i.283
Desjardins, R. (2022a). Translation and social media. In E. Bielsa (Ed.), The Routledge handbook of translation and media (pp. 415–431). Routledge. https://doi.org/10.4324/9781003221678-31
Desjardins, R. (2022b). Hello/Bonjour won’t cut it in a health crisis: An analysis of language policy and translation strategy across Manitoban websites and social media during COVID-19. In T. K. Lee & D. Wang (Eds.), Translation and social media communication in the age of the pandemic (pp. 78–97). Routledge. https://doi.org/10.4324/9781003183907-6
Desjardins, R., & Florentin, V. (2022). L’enseignement de la traduction au Canada: Réflexions à la suite de leçons tirées de la pandémie, de la justice sociale et des changements en milieux universitaires. TTR, 35(1), 203–230. https://doi.org/10.7202/1093026ar
Desjardins, R., & Florentin, V. (2023, October 6–7). Translator training and digital divides: Equitable strategies in a time of non-equitable (translation) tech. FIT Next Generation Conference.
Duncan, A. (2023). Education: The “great equalizer”. In Encyclopaedia Britannica Online. https://www.britannica.com/topic/Education-The-Great-Equalizer-2119678
Elsayed, S. (2023). Towards mitigating ChatGPT’s negative impact on education: Optimizing question design through Bloom’s taxonomy. https://doi.org/10.48550/arxiv.2304.08176
European Master's in Translation Competence Framework. (2022). https://commission.europa.eu/news/updated-version-emt-competence-framework-now-available-2022-10-21_en
Farazouli, A., Cerratto-Pargman, T., Bolander Laksov, K., & McGrath, C. (2023). Hello GPT! Goodbye home examination?: An exploratory study of AI chatbots impact on university teachers’ assessment practices. Assessment and Evaluation in Higher Education, 49(3), 363–375. https://doi.org/10.1080/02602938.2023.2241676
Fiel, J. E. (2020). Great equalizer or great selector?: Reconsidering education as a moderator of intergenerational transmissions. Sociology of Education, 93(4), 353–371. https://doi.org/10.1177/0038040720927886
Froese, I. (2022, November 8). Presidents of 2 Manitoba universities wary of funding higher education using certain outcomes. CBC News. https://www.cbc.ca/news/canada/manitoba/university-presidents-manitoba-performance-based-funding-1.6643810
Fuller, J., & Raman, M. (2017). Dismissed by degrees: How degree inflation is undermining U.S. competitiveness and hurting America’s middle class. Accenture, Grads of Life, Harvard Business School. https://www.hbs.edu/managing-the-future-of-work/Documents/dismissed-by-degrees.pdf
Gardner, H. ([1983]2011). Frames of mind: The theory of multiple intelligences. Basic Books.
Gentelet, K. (2023). Les intelligences artificielles au prisme de la justice sociale/Considering artificial intelligence through the lens of social justice. Presses de l'Université Laval. https://doi.org/10.1515/9782766301850
Giagkou, M. (2022). Digital language equality in Europe: How are our languages doing? https://www.europarl.europa.eu/cmsdata/257076/Giagkou.pdf
Gibbs, L. (2020). Let’s talk about grading. In S. D. Blum (Ed.), UNgrading: Why rating students undermines learning (and what to do instead) (pp. 91–104). West Virginia University Press.
Gilson, P. J. (2023, September 13). Here are the rules for using AI at every major Canadian university, and yes, they vary. Narcity. https://www.narcity.com/rules-for-using-ai-major-canadian-university
Goedhart, N. S., Verdonk, P., & Dedding, C. (2022). “Never good enough”: A situated understanding of the impact of digitalization on citizens living in a low socioeconomic position. Policy and Internet, 14(4), 824–844. https://doi.org/10.1002/poi3.315
Government of Canada. (2023). High school diploma and continuing education. https://www.canada.ca/en/services/benefits/education/diploma.html
Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Houghton Mifflin Harcourt.
Grizzle, A., Wilson, C., Tuazon, R., Cheung, C. K., Lau, J., Fischer, R., Gordon, D., Akyempong, K., Singh, J., Carr, P. R., Stewart, K., Tayie, S., Suraj, O., Jaakkola, M., Thésée, G., Gulston, C., Andzongo Menyeng, B. P., & Zibi Fama, P. A. (2021). Media and information literate citizens: Think critically, click wisely! (2nd ed.). https://unesdoc.unesco.org/ark:/48223/pf0000377068
Hare, S. (2022). Technology is not neutral: A short guide to technology ethics. London Publishing Partnership.
LINET. (2024). Laboratoire d’intégration du numérique en enseignement de la traduction. https://www.acet-linet.ca/
Luccioni, S., Viguier, S., & Ligozat, A. L. (2022). Estimating the carbon footprint of BLOOM, a 176B parameter language model. https://arxiv.org/abs/2211.02001
Luccioni, S. (2023, April 12). The mounting human and environmental costs of generative AI. Ars Technica. https://arstechnica.com/gadgets/2023/04/generative-ai-is-cool-but-lets-not-forget-its-human-and-environmental-costs/
Machine Translation Literacy Project. (2023). https://sites.google.com/view/machinetranslationliteracy/
McDonough Dolmaya, J. (2024). Digital research methods for translation studies. Routledge. https://doi.org/10.4324/9781003083986
McEntyre, M. C. (2009). Caring for words in a culture of lies. William B. Eerdmans.
Morrell, E., & Rowsell, J. (Eds.). (2020). Stories from inequity to justice in literacy education: Confronting digital divides. Routledge. https://doi.org/10.4324/9780367031138
Murgu, C. (2023). Is your union strategizing about AI and automation?/Votre syndicat prépare-t-il une stratégie relative à l’IA et à l’automatisation? CAUT Bulletin, 70(5), 18–19.
Nerantzi, C., Abegglen, S., Karatsiori, M., & Martinez-Arboleda, A. (Eds.). (2023). Creative ideas to use AI in education. https://docs.google.com/presentation/d/1wVgLWgeEvJm3fznlm0aV8ZiuWsW3o3aUQUCcvuM5vxQ/edit#slide=id.g252f294a89d_43_0
Nicholas, G., & Bhatia, A. (2023). Lost in translation: Large language models in non-English content analysis. Center for Democracy and Technology. https://cdt.org/insights/lost-in-translation-large-language-models-in-non-english-content-analysis/
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. https://doi.org/10.2307/j.ctt1pwt9w5
O’Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
OTTIAQ. (2015). Grille de compétences – traducteur agréé. https://ottiaq.org/app/uploads/2019/07/grilles_comptences_traduction.pdf
Parrilla, J. M. (2023, October 13). ChatGPT use shows that the grant-application system is broken. Nature. https://www.nature.com/articles/d41586-023-03238-5
Perrigo, B. (2023, January 18). Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/
Poirier, É., & Boucher, G. (2023, October 6–7). The vital importance of translation profession and training in the era of neural machine translation. FIT Next Generation Conference.
Pulfrey, C., Buch, C., & Butera, F. (2011). Why grades engender performance-avoidance goals: The mediating role of autonomous motivation. Journal of Educational Psychology, 103, 683–700. https://doi.org/10.1037/a0023911
Rehm, G., & Way, A. (2023). European language equality: A strategic agenda for digital language equality. https://european-language-equality.eu/ele-book/
Reimers, F. M. (Ed.). (2022). Primary and secondary education during Covid-19: Disruptions to educational opportunity during a pandemic. Springer. https://doi.org/10.1007/978-3-030-81500-4
Reiner, P. B., & Nagel, S. K. (2017). Technologies of the extended mind: Defining the issues. In J. Illes (Ed.), Neuroethics: Anticipating the future (pp. 108–122). Oxford University Press. https://doi.org/10.1093/oso/9780198786832.003.0006
Roose, K. (2021). Futureproof: 9 rules for humans in the age of automation. Random House.
Rose, R. (2023). ChatGPT in higher education: Artificial intelligence and its pedagogical value. UNF. https://unf.pressbooks.pub/chatgptinhighereducation/
Sankaranarayanan, S., Kandimalla, S. R., Hasan, S., An, H., Bogart, C., Murray, R. C., Hilton, M., Sakr, M., & Rosé, C. (2020). Agent-in-the-loop: Conversational agent support in service of reflection for learning during collaborative programming. In I. Ibert Bittencourt, M. Cukurova, K. Muldner, R. Luckin, & E. Millán (Eds.), Artificial Intelligence in education: 21st International Conference, AIED 2020. Lecture Notes in Computer Science (pp. 273–278). Springer. https://doi.org/10.1007/978-3-030-52240-7_50
Selwyn, N. (2011). Schools and schooling in the digital age: A critical analysis. Routledge. https://doi.org/10.4324/9780203840795
Statistics Canada. (2022). Education: 2021 Census promotional material. https://www.statcan.gc.ca/en/census/census-engagement/community-supporter/education
Statistics Canada. (2023a). English–French bilingualism in Canada: Recent trends after five decades of official bilingualism. https://www12.statcan.gc.ca/census-recensement/2021/as-sa/98-200-X/2021013/98-200-X2021013-eng.cfm
Statistics Canada. (2023b). Multilingualism of Canadian households. https://www12.statcan.gc.ca/census-recensement/2021/as-sa/98-200-X/2021014/98-200-X2021014-eng.cfm
Steipe, B. (2023, March 3). My students are using ChatGPT to write papers and exams – and I support it. Macleans.ca. https://macleans.ca/society/technology/chatgpt-ai-university-students-professor-exams/
Stommel, J. (2023). Undoing the grade: Why we grade, and how to stop. Hybrid Pedagogy Inc.
TTR. (2024). https://www.mcgill.ca/litterature/fr/ttr-2024
Toronto Metropolitan University. (2023, May 4). ProfBot, a new AI-powered chatbot, could be student’s next study-buddy. https://www.torontomu.ca/tedrogersschool/news-events/2023/05/ai-powered-chat-bot-study/
Trumbore, A. (2023, February 22). ChatGPT could be an effective and affordable tutor. The Conversation. https://theconversation.com/chatgpt-could-be-an-effective-and-affordable-tutor-198062
Truth and Reconciliation Commission of Canada. (2015). Honouring the truth, reconciling for the future. https://ehprnh2mwo3.exactdn.com/wp-content/uploads/2021/01/Executive_Summary_English_Web.pdf
Van Noorden, R., & Perkel, J. M. (2023, September 27). AI and science: What 1,600 researchers think. Nature. https://www.nature.com/articles/d41586-023-02980-0
Vuorikari, R., Kluzer, S., & Punie, Y. (2022). DigComp 2.2: The Digital Competence Framework for Citizens – With new examples of knowledge, skills and attitudes. https://publications.jrc.ec.europa.eu/repository/handle/JRC128415
Wikipedia. (2024). Reliability of Wikipedia. https://en.wikipedia.org/wiki/Reliability_of_Wikipedia
Williams, D. H., & Shipley, G. P. (2021). Enhancing artificial intelligence with Indigenous wisdom. Open Journal of Philosophy, 11(1), 43–58. https://doi.org/10.4236/ojpp.2021.111005
Zhou, X. (2019). Equalization or selection?: Reassessing the “meritocratic power” of a college degree in intergenerational income mobility. American Sociological Review, 84(3), 459–485. https://doi.org/10.1177/0003122419844992
Zuboff, S. (2019). The age of surveillance capitalism: The fight for the future at the new frontier of power. Profile Books.
[*] Traduction professionnelle assistée par ordinateur; Outils informatiques des langagiers; Outils informatiques avancés; Traduction en technologie de l’information [Computer-assisted professional translation; Technology for language specialists; Advanced technology for language specialists; Translation and IT].
[†] USB: Informatique et traduction [Technology and translation]; Sherbrooke: Informatique fonctionnelle et traduction [Introduction to computing and technology in translation]. This is the information that was available on the respective institutional websites at the time of writing; please note that the English translations are our own, as these course titles had not previously been translated into English.