Those of us in the translation industry know that getting an ATA certification is the gold standard in the US. For years, there has been discussion about putting the certification test on a computer, where we all naturally do our translation work.
In early April of this year (2016), I had a chance to sit for this new computer-based test. Overall, it was a good experience, and I feel I have as much of a fighting chance to pass as my peers who take the written test. Still, there are some important differences, which I'll cover in this post. Feel free to consider them, as long as you keep in mind that these are my own anecdotal experiences and insights, in no way promoting or representing any organization's views.
Why did I choose to take the computer-based test?
I didn't even consider the computer-based test until I had passed the practice test and begun looking seriously at dates and places for the certification test. I had heard about it through my local ATA chapter, NCATA, but it became a real consideration because it was one of the earliest locally scheduled tests in 2016.
Considering it meant spending some time thinking about the risks of taking any new-format test, and about the fact that I would have to travel all the way to Charlotte, NC (I live in DC). It turns out that CATI (the Carolina Association of Translators & Interpreters) was holding its annual meeting the day before the test. Since I had met some new friends at last year's ATA Conference in Miami, this was a wonderful opportunity to reconnect with 'old' friends and take the test. Two birds, one stone.
The 'newness' factor was the real issue, for two reasons. First, I worried that this test might not be graded in a manner equivalent to the written version. In my language pair, English to Chinese, the passing rate is a little more than 13%. Numbers like that make me nervous as it is, and the thought that they might be different for the computer version compounded that feeling. The second issue was the possibility of some sort of technical hiccup before or during the exam. It helped that there was a backup plan: a written test could be taken in lieu of the computer version.
Were there any surprises during the test?
Some, but nothing huge. All in all, it was pretty simple. Go in the room. Sit down. Listen to instructions. Work. Look at the clock. Work. Instead of ending the test by putting down your pencil, you save your content. To be sure, there were things to do and deal with that don't come up in my normal working environment.
Interface: The interface I worked with during the test was different in some ways from the environment I normally work in with Trados, Word, etc. These differences could be somewhat tedious at times, in that it could take a few extra steps to write or save text. Still, there were not many such issues, and they were clearly explained in the orientation before the test began.
References: Allowable references include printed materials brought to the test (20 lb dictionaries that I haven't used since high school) and 'non-interactive' online resources. This meant that about 98% of my references were sourced online, with only a few select terms looked up in the booster-seat dictionaries.
Time: I thought one clear advantage of a QWERTY keyboard and computer interface would be that I could type faster than I write, that my hand and wrist wouldn't fatigue, and that I could delete text quickly and easily. All of this turned out to be true, but the test still took me the full allotted time. I wonder, by contrast, how my translation technique would have differed on the written version.
Tests of this ilk, and the professionals who take them, are in a rough spot. Tests need to have certain qualities (they should be comparable, measurable, etc.), and so ATA has chosen to time theirs. Perhaps the biggest issue here is the unit/quality problem: translators don't typically charge by the hour (time); they charge by the word. Quality should mean getting the translation right, not necessarily getting it done quickly. A better test, in a utopian world, would reflect this.
Still, the computer-based test seems to be a clear step in the right direction. As someone who, while taking the practice exam, relived painful childhood memories of writing out countless Chinese characters, I certainly appreciate the capabilities offered by the more familiar and comfortable keyboard on the certification exam.