1. Thou shalt always provide final, locked picture and text.
Revisions to texts are a fact of life for any writer. Texts get better as they are honed and crafted, carefully reworded to optimize the delivery of a message. And while this is certainly a normal and desirable part of the writing process, it becomes decidedly inconvenient to accommodate author’s alterations (AAs) once subtitling has already commenced.
Because proper subtitling always requires time code spotting, i.e. the assignment of a frame-accurate time code “in” and “out” point for each individual subtitle, any change to the video—or the text—will require new time code spotting “downstream” from where the change has occurred.
Now, if, say, ten seconds of footage are summarily cut from the middle of the video and there are no other changes, then all the downstream subtitling can be moved up by those ten seconds and the change is benign.
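That benign case can even be handled mechanically. As an illustrative sketch only, assuming the subtitles live in a standard SRT file (which may not be the format a given workflow uses), a uniform ten-second shift looks like this:

```python
import re

def shift_srt(srt_text: str, offset_seconds: float) -> str:
    """Shift every timestamp in an SRT document by offset_seconds
    (negative values move the subtitles earlier)."""
    stamp = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

    def shift(match):
        h, m, s, ms = (int(g) for g in match.groups())
        total_ms = ((h * 60 + m) * 60 + s) * 1000 + ms
        total_ms = max(total_ms + int(offset_seconds * 1000), 0)
        h, rem = divmod(total_ms, 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    return stamp.sub(shift, srt_text)

# A cut removed ten seconds before this cue, so it moves up by ten seconds:
cue = "1\n00:01:15,500 --> 00:01:18,000\nHello, world.\n"
print(shift_srt(cue, -10))  # the cue now starts at 00:01:05,500
```

Anything more complicated than a single clean cut, though, breaks this simple arithmetic, as the next paragraph explains.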
But if footage is added and/or deleted or new text or video that is longer or shorter is inserted in multiple places, then a detailed reworking of the time code spotting and the subtitling translation will be all but inevitable. And if the subtitling process has already moved to rendering and output of finished video, the whole process of inserting titles into the video and generating a final output will have to be undertaken again, at additional cost.
2. Thou shalt provide as much technical information about the picture as possible.
Information can only help, so the more, the better. Everyone from our technical staff to our translators and proofreaders needs a sense of the “canvas” on which your subtitling will appear.
Do you know the video’s frame size? It determines how much text can fit on screen, and consequently, which wording will make the best translation.
Do you know the frame rate? This is the number of frames per second. Subtitling looks best when we can snap the titles to nearby picture edits. This is less demanding on your audience’s eyes and less obtrusive to your picture. Knowing the frame rate lets us time subtitles with single-frame accuracy rather than to the nearest whole second, a level of precision many other translation companies ignore.
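For the technically inclined, the sketch below shows what frame accuracy means in practice: converting between seconds and frames and rendering a non-drop-frame timecode. The helper names are invented for illustration, and real 29.97 fps workflows typically use drop-frame timecode, which this sketch glosses over:

```python
def seconds_to_frame(t: float, fps: float) -> int:
    """Map a moment in seconds to the nearest whole frame at a given rate."""
    return round(t * fps)

def frame_to_timecode(frame: int, fps: float) -> str:
    """Render a frame number as a non-drop-frame HH:MM:SS:FF timecode."""
    fps_int = round(fps)  # e.g. 29.97 fps timecode labels 30 frames per second
    total_seconds, ff = divmod(frame, fps_int)
    h, rem = divmod(total_seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02}:{m:02}:{s:02}:{ff:02}"

# A cue at 83.4 seconds in a 25 fps (PAL) program:
print(frame_to_timecode(seconds_to_frame(83.4, 25), 25))  # 00:01:23:10
```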
Do you know the video codec? Video data are massive, and codecs are the algorithms used to compress this data to manageable sizes, ideally at the least cost to picture quality. To make sure you get the best looking picture possible, our video experts will want to know the codec of your master video file.
Appreciating that this stuff is scarcely common knowledge, remember that we’re always happy to geek out with your creative or technical team in order to get these details sorted.
3. Thou shalt compose thy picture with subtitles in mind.
Especially today, as media distribution and sharing become almost as vital and as demanding as media production, consider your global audience. If you have not yet shot your video, it pays to remember that your subtitling will appear in the lower third of your picture. Compose your shots accordingly! And place any on-screen graphics and any name titles and IDs where they will not conflict with your subtitles.
All too often an afterthought, subtitling makes your message accessible to millions. But if the subtitling collides with name titles and IDs or overlaps with other on-screen text, the result can be less than pretty. Opaque or semitransparent banners will be needed as a background for the subtitling to be legible, thereby obscuring the underlying message and making it clear to your viewership that the subtitling was indeed an afterthought.
Take into consideration that other languages have accents (on capital letters, too) and descenders that will require more vertical line space (aka “leading”) than English does.
And remember that if there is already a lot of on-screen text to read, there just might not even be enough time to read the subtitles as well.
4. Thou shalt consider the full life cycle of thy picture.
So we’ve received your video, performed the translation, cross-checked with proofers and editors, fine-tuned the timing and the wording, performed final QA and sent your new video off into the world. But where will it be viewed? What kind of technology will be used? Who else might be able to alter it?
Predicting the future life of your video may be impossible, but there are considerations worth making. If your picture is a 16 x 9 rectangle, might someone else produce a center-cropped 4 x 3 dub that chops off the edges of your picture and thus your translations and graphics as well? Might your picture be screened on a small device or kiosk screen that covers up the edges?
Knowing even a little context helps us plan for your specific needs.
5. Thou shalt provide any already-approved proprietary translations.
Sometimes news is slow to travel. So if you are in charge of Creative or Production, but Marketing and Distribution forgot to mention that they already decided on a foreign language name for your new product, then our translators will inevitably come up with something else. Would you have guessed that a McDonald’s Quarter-Pounder is a “Royale with cheese” in France?
Proprietary names are impossible to guess correctly. Take a moment to review your transcript before subtitling begins and make a list of any proper names and proprietary terms. Chances are that a ready-made translation exists or that some precedent indicates how these terms should be handled in translation.
In many languages, certain proprietary terms will not be translated but transliterated. This applies to proper names in particular: the translators will need to know how a name is pronounced so that they can render it phonetically in their native script. This concerns every language that uses a character set other than the Latin-based characters used in English.
6. Thou shalt know thy target country—not just thy target language.
Some languages cover a lot of ground and inevitably vary according to their turf. Your video may be for Spanish-speaking audiences, but whose Spanish exactly? Spain, Cuba, Argentina, Mexico or one of the 18 other countries where Spanish is spoken? Because subtitles are written text, indifferent to pronunciation, you might think that country-specific localization does not matter. But consider that many Spanish-speaking countries use different words for the same thing. A car is a “coche” only in Spain, and a “carro” only in Puerto Rico and the United States.
Another example is Portuguese, which is spoken in both Portugal and Brazil. While the accents are very different, there are also grammatical and syntactical differences that immediately clue a native speaker in to the provenance of the text.
Knowing the target country is even more important for a language like Chinese. Consider that Mandarin is spoken in both the mainland (PRC) and Taiwan. While the spoken dialects differ, the character sets used differ completely. The mainland uses the simplified character set introduced under Chairman Mao in the 1950s, while the traditional character set is still used in Taiwan. And for political reasons, it is an extreme faux pas to use either in the locale of the other. In fact, authorities on the mainland have even levied fines for broadcasts that used traditional Chinese on-screen.
Ask a helpful InterNation Project Manager to help you plan for your audience.
7. Thou shalt consider additional languages.
The subtitling process begins with timecode spotting, a process in which the text is “sliced” up for translation and the “in” and “out” points for each subtitle are determined. This slicing can be finer (more subtitles, each with fewer words) or coarser (fewer subtitles, each with more words) according to certain subtitling considerations. Obviously, the grammar and sentence logic matter most, followed by the pace of the speaker, the size of the screen and other details.
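For illustration only, a crude “coarse” slicer might wrap the text to a line limit and group the wrapped lines into two-line subtitles. Real spotting also weighs grammar, the speaker’s pace and nearby picture edits, none of which this mechanical sketch attempts:

```python
import textwrap

def slice_into_subtitles(text, max_chars_per_line=40, max_lines=2):
    """Wrap the text to the line limit, then group the wrapped lines into
    subtitles of up to max_lines each (a purely mechanical 'coarse' cut)."""
    lines = textwrap.wrap(text, width=max_chars_per_line)
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

titles = slice_into_subtitles(
    "Proper subtitling requires the assignment of a frame accurate "
    "in and out point for each individual subtitle in the program.")
for title in titles:
    print(title, end="\n\n")
```

A finer slicing would simply lower `max_chars_per_line` or `max_lines`, producing more subtitles with fewer words each.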
Another consideration is the gamut of target languages for subtitling translation. What we’ve found is that the same breakdown of parts typically fits members of the same linguistic families. Romance languages like Spanish, French, Italian and Portuguese can usually use the same “time map” for subtitles because their grammar and word roots are similar. So if your subtitling project only has need of these Romance languages, timecode spotting could be optimized just for them.
However, if the order includes very different languages, for example Spanish, Russian and Japanese, then timecode spotting has to be optimized for very different sensibilities. Russian in particular tends toward fewer but substantially longer words than any Romance language. And while the Japanese Kanji characters derived from Chinese are very compact and information-dense, the phonetic Hiragana and Katakana are not, and all three character sets are intermingled at will in smooth Japanese subtitling, making Japanese particularly prone to inflation.
So even if Spanish is the only language on your order for now, it is definitely worth mentioning any possibility, however slim, that your project may be localized into other languages as well, so the time code spotting can be reused.
8. Thou shalt check thy transcription.
We once had to translate a film professor’s lecture on independent cinema of the 70s. The film student who had produced the source transcript heard the sentence, “Soon everybody wanted to shoot like Cassavetes” as “Soon everybody wanted to shoot like a Cast of Eddies.”
Hilarious, but wrong. A spell check could not have caught that mondegreen, but a second set of proofreading eyes could have.
And do make sure the transcript is an accurate, verbatim reflection of the spoken text in the video—with exceptions as per Do and Don’t #9.
9. Thou shalt fix thy broken English.
Even if that’s how people talk, words like “gonna” and “wanna” do not necessarily have counterparts in other languages. Including them in the transcription used for subtitling is only likely to slow down our linguists and the review process, or lead to a very informal text that may not be appropriate for your message or the intended audience. Even if you are only interested in subtitling in English, consider too that these words look unkempt on screen and are likely to confuse your audience: is “gonna” used satirically, fondly, contemptuously?
Consider that grammar counts for more when being read than when being spoken. A news anchor might have actually said, “Here’s Scott and Pam with the story” without anyone’s ears taking note, but viewers’ eyes will expect to see “Here are Scott and Pam with the story.”
10. Thou shalt pay attention to character length restrictions during client review.
At InterNation, all subtitling scripts are generated in dual-column table format with English text on the left and the corresponding translation on the right. Each subtitle can only be two lines long and each line is in its own cell, which defines how long the title can be. This is more accurate than saying a subtitle may only be, for example, 40 characters long, because “i” and “l” take up a lot less space than “W” and “M.” So the total number of characters can be more or less than 40 depending on which characters appear most often.
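The proportional-width idea can be sketched in a few lines. The width classes and weights below are invented purely for illustration; a production check would query the actual metrics of the font being used:

```python
# Invented width classes; a production check would use real font metrics.
NARROW = set("iljftr.,;:'! ")
WIDE = set("WMmw@")

def estimated_width(line: str) -> float:
    """Rough rendered width of a line, in 'average character' units."""
    total = 0.0
    for ch in line:
        if ch in NARROW:
            total += 0.5
        elif ch in WIDE:
            total += 1.5
        else:
            total += 1.0
    return total

print(estimated_width("lilt"))  # 2.0: narrow letters leave room to spare
print(estimated_width("WMMW"))  # 6.0: wide letters use the budget fast
```

Two four-character words can thus differ threefold in screen width, which is exactly why a fixed table cell beats a raw character count.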
To save time and money and avoid rework, review and editing of a subtitle translation should always be done before the text is married to the picture. And it is very important to observe the hard-and-fast rule of only one line of text per table cell. Cheating is not permitted: you may not change the font size or increase the size of the table cell to accommodate more text!
1. Thou shalt not commit excessive fidelity in transcription.
While it’s true that a faithful transcription can capture the particular flavor of a voice, excessive fidelity to every quirk of the spoken text will never be helpful for subtitling translation. Efficient reading has to be paramount, because subtitle translations exist in time. So even if removing every “ah,” “um” and “uh” from your transcript loses a bit of fidelity, your viewers’ eyes will appreciate the economy and improved readability. Consider too that trimming the fat this way is always helpful in expediting the subtitling process, from timecode spotting through translation and final QA.
It’s also a good idea to cut away any false starts and any redundant text. False starts? They’re not… I mean… false starts are… well, they…. A false start to a sentence just includes more text to read that does not add anything to the overall message. In the world of subtitling, where the number of words that can be used is already limited due to the space constraints of the picture, effective communication is essential. And the redundancy created by false starts just detracts from the actual message.
Also, if the script accurately reflects broken English or flawed syntax or grammar, it is worth editing such passages into clean English that will translate well. Remember that translators do not translate words, they translate meaning and context. Let your speaker’s delivery and performance speak for themselves; the meaning is what should be in your subtitles: clear, lean, and coherent.
2. Thou shalt not change video formats mid-stream.
Subtitles are parsed into legible bites according to linguistic logic and grammar, convenient reading, considerations of the target languages, the speaker’s pace, the available screen space and the actual size of the font and the frame rate of the video. All the foregoing will determine how many characters can fit onto a given line.
This process is called time code spotting and it happens before the translation is ever started, so that the translators will know and can see how much space they have for their text.
If work begins in HD with a certain approved font and point size, then all of those calculations have to be scrapped if the picture size of the program is changed subsequently. The same can happen if there is a request to increase the size of the font, or if a different font is requested after the initial time code spotting. Some fonts are very efficient in terms of how much screen space they use up, and others are decidedly not.
Changing picture size, fonts or font sizes may mean that all the line and title breaks need to be reset, which effectively means reworking the entire translation to fit the parameters of the new format and inevitably leads to cost overages.
And it hopefully goes without saying that subtitling created for a video with a frame rate of 23.98 fps will not transition to a video running at 29.97 fps without an entire overhaul of the time code spotting.
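To see why, consider what happens to frame-based cue points when a conversion preserves running time, as a 3:2 pulldown from 23.976 to 29.97 fps does: every wall-clock moment keeps its place, but every frame number changes. This sketch is illustrative only; a speed-changing conversion such as 25 to 23.98 fps would need the timing rescaled instead:

```python
def remap_frame(frame: int, src_fps: float, dst_fps: float) -> int:
    """Re-express a frame-based cue point at a new frame rate, assuming the
    conversion preserves running time (as a 3:2 pulldown does). A speed-
    changing conversion such as 25 to 23.98 fps would need rescaled timing."""
    seconds = frame / src_fps
    return round(seconds * dst_fps)

# A subtitle that cues on frame 1440 of a 23.976 fps master:
print(remap_frame(1440, 23.976, 29.97))  # lands on frame 1800, not 1440
```

Every single in and out point moves this way, which is why the spotting has to be overhauled rather than copied across.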
3. Thou shalt not work in the dark (aka Thou shalt provide the video).
More than once, a client’s schedule has required that a subtitling translation be completed before the video was finished. This is an inherently bad idea for many reasons.
Subtitling is very much like typesetting, but with the added constraint of having to be sensitive to and reflect the timing of the pictures to which the text needs to be synchronized. In other words, subtitles exist in space and time.
Consider that a long sentence may not be spoken at an even, steady pace; rather, the speaker might slow down or speed up for dramatic effect. Without hearing the delivery in real time, the subtitler will have no idea how to parse the sentence and break it up for subtitling purposes. For example: should a given sentence be broken into subtitles of two lines, two lines, one line, and one line of text? Or should it be one line, two lines, two lines, and one line of text? On paper this may not make any difference at all, but once text is overlaid onto video, some versions of the subtitling will not work as well as others, or at all.
Consider also that while subtitles usually follow the spoken text, sometimes the timing will follow a picture edit to avoid a subtitle dangling over a jump cut for a few frames. Perhaps not a big deal, you may say, but it’s the difference between subtitling done with care and attention to detail and just slapping words on a moving image. And the difference is readily apparent.
But if subtitling must be done without the benefit of a video, be prepared for a workflow that is a bit like driving on a moonless night with no lights (and no night vision goggles). It is highly likely that at least some of the time code spotting and translation will have to be redone if good results are desired.
4. Thou shalt not attempt to shoehorn a manuscript style translation into subtitles.
On more than one occasion we have been approached by a client who wanted to subtitle a video using their own translation. In principle there is no problem with such a request, if the details of the workflow accommodate best practices.
The problem with a request like this is that the client is usually not familiar with how subtitle scripts are formatted according to line length restrictions. Frequently they produce a manuscript style translation and assume that subtitling is a mechanical process of chopping up the translation into subtitle-sized chunks. What is often overlooked is that the resulting translation will have grown or inflated, as translations are well known to do. A Spanish, French or Italian text will be about 25% longer than the English source text. A Russian text may have fewer words than the English, but contain about 30% more characters, which poses a variation on the same dilemma.
While it is possible to reduce the font size so that the longer translation text can fit on screen, invariably there will be just too much text to read and not enough time to read it. Viewers quickly tire from the speed reading required to process what we refer to as “text cramming,” a somewhat kinder expression than “bad subtitling.”
To make the client translation usable, one will have to edit the translation so that it is no longer than the English. Many clients are understandably reluctant to have their carefully crafted, elaborately worded translation pruned, but there is really no other option at this point.
When clients want to use their own translators, we’ve developed a workflow that can accommodate this, as long as the client’s translators play by certain rules. We create a time code spotted, dual column script in which the program has essentially been subtitled in English. Client translators can now see exactly how many characters/words they can fit onto each line. As long as they obey the line length restrictions and do not change the font, the font size or the size of the table cell, the resulting translation will be in sync with the picture, and it will be readable.
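The line-length rule lends itself to a mechanical check. The sketch below flags rows of a hypothetical dual-column script whose translation has outgrown its spotted English line, using raw character counts as a stand-in for real font metrics:

```python
def over_limit_rows(rows, slack=1.0):
    """Return the indices of dual-column script rows whose translation runs
    longer than the spotted English line times an allowed slack factor.
    Raw character counts are only a proxy for real font metrics."""
    return [i for i, (english, translated) in enumerate(rows)
            if len(translated) > len(english) * slack]

script = [
    ("Welcome to the plant tour.", "Bienvenue à la visite de l'usine."),
    ("Hard hats are required.", "Casque obligatoire."),
]
print(over_limit_rows(script))  # [0]: row 0 has outgrown its English cell
```

Flagged rows go back to the client’s translators for pruning before the text is ever married to the picture.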
If you have any questions, contact a friendly, helpful and knowledgeable Project Manager at InterNation for a free consultation.
5. Thou shalt not consider subtitling and captioning the same thing.
Subtitling is the process of “burning” text into the video. It can be done for any language for which a typeable font exists, and any font available for a given language can be used, though some are decidedly better than others. Once the subtitles are married to the video, they are a part of the picture and can no longer be removed.
By contrast, closed captions, like DVD subtitles, can be turned on or off. Here the text is not a part of the picture; rather, it is encoded into line 21 of the video signal, in the vertical blanking interval. Font choices are not available for captioning as they are with true subtitling. And while subtitles typically only reflect the spoken text, closed captions also address sound effects and/or music for the hearing impaired.
But perhaps the most important distinction is that closed captioning cannot accommodate all languages. While the list of languages that are compatible with closed captioning is growing, many languages of limited diffusion are still not available as closed captions.
6. Thou shalt not confuse subtitling with on-screen text or graphics.
As a rule, subtitles are one or two lines of text that reflect the spoken words of the voice track. They are customarily located in the title safe area at the bottom of the screen. They are typically white or a pale yellow and may be framed for better legibility by an opaque or semi-transparent box or banner, which can be present throughout the entire video or deployed only when necessary, e.g. when a white subtitle disappears into a white background or a businessman’s white dress shirt.
By contrast, on-screen text, aka OST, can be any text that relates to the narration track or provides additional information or reiterates the spoken message for additional emphasis. In its simplest form, on-screen text can be a list of bullet items that appear or “build” one after another. On-screen text is usually designed as graphics in Photoshop or Illustrator or in the video editing software itself. OST can be animated using sophisticated software such as Adobe After Effects to make text change colors, move through the picture, scroll, twist, spin, resize—the options are virtually endless.
These types of animation are not possible with subtitling software and you certainly would not use After Effects to create simple, static subtitles. While it is pretty easy to quote subtitles based on the run time of a video or the word count of a script, quoting localization of animated graphics is really not possible without seeing the images, and being able to review the program files that were used to generate the animated English version.
7. Thou shalt not insist on using subtitling when it is inappropriate.
Subtitling is popular in particular for reasons of cost: It is always substantially cheaper to subtitle a video than to hire voice actors, a recording studio with an audio engineer and a dialog coach to record a voice-over.
That being said, there are situations where subtitling will not be in the best interest of the message, no matter how cost-effective it may be. By their very nature, subtitles overlay and cover up the area of the picture where they are placed. Having subtitles in a training video that demonstrates important procedures will be counterproductive if the very procedure being illustrated is covered up with subtitling. Also, bear in mind that subtitling does “compete” with the video images for eye time, and time spent reading the subtitle will not be spent watching the video and vice versa.
Lastly, the educational level of the target audience can also matter. If reading skills are limited, subtitling may not be the best choice.
8. Thou shalt not try to subtitle a video without a transcript.
Many clients assume that a translator can and will translate directly from the video without the benefit of a transcript. The theory is that you play the video, listen to the audio and then type up your translation which is immediately overlaid onto the video. While this may seem very efficient, it is not a good solution for at least two very good reasons.
Accuracy: Reviewing and editing a written text in one language against an audio file in another language is far more difficult and time consuming than simply having the written text in two languages side by side.
Validation: Many clients insist on a translation review as part of the workflow, and subtitling is no exception. The industry standard is to use word processing software (such as MS Word) that is capable of tracking changes, so that there is documentation of what has been deleted, added or changed that everyone can see. Different reviewers will track their changes with different colors, so that in the end there is a clear paper trail of who did what. Subtitling directly from the source video does not provide this important functionality that allows for a consensus of opinion between translators and reviewers, a crucial best practice that should always be followed.
9. Thou shalt not proceed to subtitling before the translation has been reviewed and approved.
An efficient subtitling workflow dictates that a subtitle translation should be reviewed on paper before it is burned into the video. Correcting word choices or rearranging syntax at this stage of the workflow is done quickly and easily.
By contrast, if the translation is reviewed only after the subtitling has been completed, any text revisions will need to be loaded into the subtitling software, the subtitles will need to be re-exported to the video editing program and the video will have to be rendered and output again. Repeating these work steps takes time and adds unnecessary expense.
10. Thou shalt not forget about NTSC and PAL—yet.
In the “old” days of standard definition analog TV, different countries used different broadcast standards that were not compatible with each other. The United States and Canada used NTSC, which has a picture size of 720 × 486 pixels and a frame rate of 29.97 frames per second, while most European countries used the PAL standard, which has a picture size of 720 × 576 pixels and a frame rate of 25 fps. There was also a third standard called SECAM, used mainly in France and Russia, which was phased out in the late 2000s.
Any video that was destined for a cathode ray TV had to be converted to the appropriate standard for the target country.
With the advent of high definition or HD TV there is no longer an issue of different picture sizes between countries, though there is still a difference of frame rates that must be taken into consideration if the video will be viewed on a flat screen TV. But given that the majority of videos being subtitled are destined to be disseminated through the Internet, this is not necessarily an issue.
However, it is worth remembering that for the foreseeable future there will still be many NTSC and PAL playback devices, for example VCRs, analog camcorders and DVD players that are not HDMI enabled, and are still in use around the world. So it pays to be mindful of where and how your subtitled videos will be watched.