Accordingly, we proposed to test four hypotheses, referred to as Hypotheses 1 through 4 below. The languages studied primarily include the official working languages of the European Union, for which large bodies of translated parallel text are available.

Experimental design

To test these hypotheses, we directly compared the quality of outputs from Google Translate (a statistical translation engine), Yahoo Babelfish (a traditional rule-based translation engine), and Microsoft Bing Translator (a hybrid statistical engine with language-specific rules).
We varied three factors: the input and output languages; the length of the text, in characters; and whether the passage was a single sentence or phrase vs. multiple sentences. From this data, the following conclusions can be drawn. For long passages of text (up to the maximum length tested), survey takers generally prefer Google Translate's results across the board. However, in several other languages, such as German, Italian, and Portuguese, Google holds only a very slim lead over its closest competitor.
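The kind of preference tally behind these conclusions can be sketched as follows. The engine names are those compared above, but every vote here is an invented placeholder, not the study's actual survey data.

```python
# Sketch of aggregating survey preferences by language pair and text length.
# All votes below are hypothetical examples, not the study's data.
from collections import Counter, defaultdict

# Each vote: (language_pair, length_bucket, preferred_engine)
votes = [
    ("en-fr", "long",  "Google Translate"),
    ("en-fr", "long",  "Google Translate"),
    ("en-fr", "short", "Bing Translator"),
    ("en-de", "short", "Babelfish"),
    ("en-de", "long",  "Google Translate"),
]

tally = defaultdict(Counter)
for pair, length, engine in votes:
    tally[(pair, length)][engine] += 1

# Report the preferred engine for each condition.
for condition, counts in sorted(tally.items()):
    winner, n = counts.most_common(1)[0]
    print(condition, "->", winner, f"({n} votes)")
```

Grouping votes by condition like this is what makes it possible to see the length effect discussed later: the winner for ("en-fr", "long") need not be the winner for ("en-fr", "short").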
These observations support Hypothesis 1: no single engine performs equally well across a spectrum of languages or conditions. The greatest relative strength of the statistically focused engine (Google Translate) did not cluster around the European Union working languages as expected.
German, Italian, and Portuguese, all EU working languages, are the most hotly contested from a performance perspective.
One possible explanation is that large additional bodies of parallel English-French text are available from the government of Canada, whose official documents are published in both languages.
To a lesser extent, this could explain the strength of Google Translate in Spanish, as many Latin American countries offer English translations of official documents. This data partially refutes Hypothesis 2. The traditional rule-based translation engine (Babelfish) performed generally well in East Asian languages such as Chinese and Korean.
One possible reason for this performance is that language-specific grammar and word-usage rules are more effective than purely statistical, association-based translation in these situations. These findings are in line with Hypothesis 3, but the data set is not large enough to confirm the hypothesis in a statistically significant manner. Across almost every language, Bing Translator and Yahoo Babelfish gain ground on, or surpass, Google Translate as the text length gets shorter. In other words, as phrases get shorter and more straightforward, rule-based or hybrid translation engines perform better.
Though the data is not shown, a similar effect is seen for passages that are only one sentence compared to passages with multiple sentences. This data strongly supports Hypothesis 4. The most interesting observation is that translation quality is not a two-way street: the engine that is best for translating in one direction is not necessarily the best tool for translating back the other way.
The two most obvious cases of this are French and German. Though Google Translate dominates when translating from both of these languages to English, it faces heavy competition when translating from English into the foreign language. Brand bias is the most plausible explanation for this increased preference for one tool. When that bias is taken into account in viewing the results in Table 1, many more language pairings would be hotly contested or would favor Bing Translator or Babelfish.
Due to the size of the data set, we have chosen not to separate the data for further analysis. However, in future experiments we will attempt to test this effect on a language-by-language basis.
Practical application

With this data, regular users of free online translation tools can choose the most effective tool for each situation. Systran is primarily a rule-based translation engine that has been developed to very high precision over the last 40 years.
In more recent years, Systran has blended its rule-based translation engine with a statistical translation engine to improve flexibility. This has resulted in a significant improvement in translation accuracy.
The drawback of the statistical approach is that it does not apply explicit grammatical rules, since its algorithms are based on statistical analysis rather than hard-coded rules. Its main benefit is that it avoids the manual development of linguistic rules that rule-based systems require, which is costly and does not carry over to other languages.
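The tradeoff can be illustrated loosely as follows. This is not Systran's actual architecture; the lexicon, phrase pairs, and sentences are all invented for illustration.

```python
# Toy contrast between the two approaches described above.

# Rule-based: linguists hand-author per-language resources and rules,
# which is costly and must be redone for every new language pair.
def rule_based_en_to_fr(sentence: str) -> str:
    lexicon = {"the": "le", "cat": "chat", "sleeps": "dort"}
    # One hard-coded "rule": keep subject-verb word order.
    return " ".join(lexicon.get(w, w) for w in sentence.lower().split())

# Statistical: a phrase table is learned automatically from parallel text,
# so the same code works for any language pair given enough data.
def learn_phrase_table(parallel_corpus):
    table = {}
    for src, tgt in parallel_corpus:
        table[src] = tgt  # real systems score many candidate phrase pairs
    return table

def statistical_translate(sentence, table):
    return table.get(sentence, sentence)  # fall back to the source text

corpus = [("the cat sleeps", "le chat dort")]
table = learn_phrase_table(corpus)

print(rule_based_en_to_fr("the cat sleeps"))
print(statistical_translate("the cat sleeps", table))
```

Note that the statistical version applies no grammar at all: anything absent from its learned table passes through untranslated, which mirrors the drawback described above.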
The Google and Bing machine translations use the past tense, as though this were an event that actually took place. In contrast, the human translator uses a much more appropriate modal construction to express the obligation to do something.
How widespread are Google Translate accuracy issues? Statistical engines do not analyze grammar or meaning; instead they string bits of text together based purely on statistics. English homographs (words with the same written form but different meanings) number in the thousands. For example, James Hobbs identifies over 2,000 homographs in common American usage in Homophones and Homographs, without claiming his list is exhaustive. No doubt other languages are similar, so this is relatively commonplace.
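A minimal sketch of why purely statistical choice mistranslates homographs: if the system simply picks the translation most frequent in its training data, context is ignored. The French glosses are real senses of "bass", but the counts below are invented assumptions, not corpus statistics.

```python
# "bass" is an English homograph: a low musical register, or a fish.
# Invented frequency counts standing in for training-data statistics.
translation_counts = {
    "bass": {"basse (music)": 900, "bar (fish)": 100},
}

def pick_by_frequency(word):
    # A frequency-only picker: always chooses the most common translation,
    # regardless of the sentence the word appears in.
    senses = translation_counts[word]
    return max(senses, key=senses.get)

# Both sentences get the "music" sense, even when the fish is meant:
for sentence in ["He plays bass in a band.", "He caught a bass in the lake."]:
    print(sentence, "->", pick_by_frequency("bass"))
```

Real statistical engines condition on surrounding words rather than raw frequency alone, but the failure mode is the same in spirit: the choice is driven by statistics, not by understanding.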
Using Google Translate when quality is important is a huge gamble. Although a good percentage of Google and Bing translations are spot on, it is not uncommon for them to contain mistranslations that convey an inaccurate meaning. How can you know if a particular Google translation is OK? You could, for example, ask a target-language reader to read through the free online translation.
And even if the objective is just for the reader to understand the gist of the source text, different readers will often interpret unclear or ambiguous text in different ways. Any awkward or unnatural wording in the translation might therefore lead a reader to a wrong interpretation of what the original text intended to say.
The only sure way to confirm that a translation is of good quality is for someone who knows both languages to systematically and thoroughly compare the two texts.
When to use and not use Google Translate

Translations by Google Translate are generally fine for getting the gist of a text. At times their translations are really very good, on a par with what a professional human translator would produce. However, because of the methodology these engines use, they can also produce translations that are simply wrong: words or phrases are translated incorrectly, so the meaning conveyed is incorrect.
Their translations will also commonly contain grammatical mistakes and wording that is awkward and unnatural. Because of these issues, Google Translate should not be relied on when translation accuracy is required, or when you want a translation with natural and elegant wording.
Your rule of thumb should be to always use human translators whenever a poorly worded or inaccurate translation could cause embarrassment or worse. Managing Director and owner of Pacific International Translations.
How accurate is Google Translate? This often perplexes people: why was it accurate last time, yet so inaccurate and clumsy this time? Often parts of the translation will be fine, and others poor. Occasionally the wording will be incoherent and just not make sense at all.