
RE: Thoughts about the questionnaire and new proposals

in #witness-update · 5 years ago

Thanks, DaVinci, for this quick update! We’re definitely moving forward.

Q1

  1. Did the translator provide in the contribution post all the information needed to fully evaluate the translation? (For example, did he specify if he needed to research the definition of unfamiliar terms and which tools he used in the translation?)

Is this question really necessary? Are there really translators who consistently leave this information out of their posts, and are they so many that we need to include this in the questionnaire that will be used for everyone henceforth?
Personally, I would consider a post without this information to be unreviewable, and I’d have the translator edit it before even moving on to the translation itself.

Q2

D1 How would you rate the accuracy of the translated text?
D2 How would you rate the legibility of the translated text?

Do we really need both these questions? I would assume that an accurate translation is also a translation that is legible. A perfect word-for-word translation that doesn’t flow at all in the target language is not an accurate translation.

D6 On a 10-point scale, how would you rate the difficulty of translating the text in this contribution (with 10 being the highest)?

I still stand by my previous suggestion to assign standard difficulty levels to all projects, rather than leave this to an LM’s perception, especially on a 1-to-10 scale. This will end up punishing translators with stricter LMs even more than the current system does.

D7 How would you rate the internationalization efforts shown by the translator while translating this project?

I don’t understand this question at all.


I still stand by the suggestions I made in my post from yesterday. A good set of questions, for me, would be as follows:

  • Does this post present the work done in a personal, engaging, or otherwise outstanding format? Yes / No
  • How would you rate the grammar, syntax, and overall linguistic level of this post? Good / Poor
  • What was the volume of the translation outlined in this post (excluding duplicate strings and non-translatable words)? Whatever breakdown is deemed feasible and desirable
  • What project was this post about? List of all available projects
  • Was the translation outlined in this post significantly more difficult than the rest of the project? Yes / No
  • How do you rate the overall accuracy of the translated text? Excellent / Very good / Good / Poor
  • How many major mistakes (i.e., mistakes that can change the meaning of the text) were found in the translated text? Scale of 1 to 5 per 1,000 words
  • How many minor mistakes (i.e., mistakes that do not change the meaning of the text) were found in the translated text? Scale of 1 to 10 per 1,000 words

Personally, I agree with some previous comments made (I believe by @scienceangel on @elear’s post) that translations which would be graded lower than Poor in quality, or with more than 5 major mistakes or 10 minor mistakes every thousand words, should receive a zero score and no upvote at all. I don’t think the questionnaire needs to take those cases into consideration.
Particularly because, if the translation is that terrible, it’s the LM who ends up doing the real translation work, while the translator still gets the bigger upvote.
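To make the arithmetic behind those thresholds concrete, here is a minimal sketch in Python (the function name and signature are hypothetical, purely for illustration; no actual review tool is specified in this thread) of the per-thousand-words check:

```python
def exceeds_mistake_thresholds(major_mistakes: int, minor_mistakes: int, word_count: int) -> bool:
    """Return True when the translation should get a zero score under the proposed rule.

    major_mistakes / minor_mistakes: mistakes found in the reviewed translation
    word_count: number of translated words (excluding duplicate strings)
    """
    # Normalise the raw counts to a per-1,000-words rate.
    words_in_thousands = max(word_count, 1) / 1000
    major_rate = major_mistakes / words_in_thousands
    minor_rate = minor_mistakes / words_in_thousands

    # More than 5 major or 10 minor mistakes per thousand words -> zero score, no upvote.
    return major_rate > 5 or minor_rate > 10


# Example: 1,500 translated words with 4 major and 18 minor mistakes
# -> about 2.7 major and 12 minor per thousand words, so the minor threshold is exceeded.
print(exceeds_mistake_thresholds(4, 18, 1500))  # True
```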


Regarding accuracy and legibility: each word can have several meanings, so it’s not enough to put one of the meanings in the translation; it also has to make sense in the context of the text, hence the question about legibility. Having these two questions increases the granularity of the questionnaire. About the difficulty question, we think the questionnaire will have to be accompanied by a document or guideline. There we could provide a table with the suggested difficulty for each project. In any case, that question now has much less impact than before. Question D7 takes into account feedback from one of the Asian teams; for them it is not always straightforward to translate words that are common for us.

each word can have several meanings, so it’s not enough to put one of the meanings in the translation; it also has to make sense in the context of the text, hence the question about legibility

That’s my point, though. If it doesn’t make sense in context, the translation is not accurate. I can’t imagine a case where accuracy and legibility wouldn’t go together because I consider legibility as part of accuracy, where translation is concerned. Although, of course, the opposite might be true: a translation could be 100% wrong but written in impeccably legible form. But if it’s a bad translation it shouldn’t be scored, regardless of legibility.
