You are viewing a single comment's thread from:

RE: Questions about AI / Вопросы по AI

in Steem Alliance · last year

As soon as a text is machine-translated (and that is unfortunately most of the texts presented to us here in English, especially of course if the original is written in "foreign" characters, so for me Russian as well), the checker reports a large AI-generated portion. Sent through the translator several times (e.g. Bengali to English (already by the author), then to Spanish (by the Venezuelan community admin), then to Russian (by sc08)), the AI portion gets higher and higher.
Why? Because the automatic translators are themselves AI, ones that simply store stock phrases and repeat them over and over.
You must not rely on these checkers! At least not if you don't know the language being checked yourself.
In German I recognize an AI-generated text fairly quickly even without a checking tool. English is more difficult (although I speak English quite well), other languages are impossible!

Translated with www.DeepL.com/Translator (free version)
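Purely as an illustration of the effect described above, here is a minimal Python sketch of that experiment: send a text along a translation chain and read the checker's score after every hop. The names `translate()`, `ai_score()` and `run_chain()` are hypothetical placeholders for whatever translation service and AI checker one actually uses, not real APIs.

```python
# Minimal sketch: push a text through a chain of machine translators and
# check the reported "AI-generated" score after every hop.
# translate() and ai_score() are stand-ins (hypothetical) for the real
# services (DeepL, Yandex, an online AI checker, ...).

def translate(text: str, src: str, dst: str) -> str:
    # Placeholder: a real implementation would call a translation service.
    return text  # stub: returns the input unchanged

def ai_score(text: str) -> float:
    # Placeholder: a real implementation would call an AI checker and
    # return the reported AI-generated fraction (0.0 .. 1.0).
    return 0.0  # stub

def run_chain(text: str, chain: list[str]) -> None:
    """Translate along the chain and print the checker's score at each hop."""
    current = text
    for src, dst in zip(chain, chain[1:]):
        current = translate(current, src, dst)
        print(f"{src} -> {dst}: AI-generated score {ai_score(current):.0%}")

# The chain from the example above: author -> community admin -> sc08
sample = "Some post text written by a human author."
run_chain(sample, ["bn", "en", "es", "ru"])
```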


I agree with you. I can also get a feel for a text in Russian, and when generic, bland, faceless phrases turn up, it is alarming and raises doubts about authorship.

But checking texts for AI is one of the mandatory conditions of curation. As a curator I am obliged to do this and I do it (especially when a post covers a topic that goes beyond the user's personal experience and is even illustrated with pictures from the internet).

I wrote this post specifically to draw the community's attention to this aspect of posting. Apparently it is time to change the curation rules and abolish the AI check.

At the same time, the plagiarism check should remain. Search engines handle that correctly: they provide a link to the original source.
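As an illustration of that point (my own sketch, not part of any curation tooling): an exact-phrase search is usually enough to let a search engine point back at the original source. `phrase_search_url()` is a hypothetical helper name.

```python
# Sketch: build an exact-phrase web search URL from a suspicious sentence,
# so a search engine can point back at the original source.
from urllib.parse import quote_plus

def phrase_search_url(phrase: str) -> str:
    # Quoting the phrase forces an exact-match search.
    return "https://www.google.com/search?q=" + quote_plus(f'"{phrase}"')

print(phrase_search_url("a distinctive sentence copied from the post"))
```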

Here are the results of checking my comment, which I translated into English with the Yandex translator.

(screenshot: AI-check results for the translated comment)

Haha, I have always been particularly suspicious (not just in my three-month sc stint) when a post from users I didn't know seemed "way too" good. In most of those cases it turned out to be plagiarism (or someone else's text with rephrased sentences).
I still prefer to rely on my intuition rather than on technology. And every sc curator should do the same instead of blindly following "instructions". There have been some very, very ridiculous incidents in this regard, e.g. a user was warned because 7% (!!!) of his post was supposedly AI-generated.
I can only speak for myself and do not know how other users see it. For me, such behavior has the bitter taste of blind obedience, and that reflects (bitterly) back on various sc curators. Such conformity towards "big curators" (to put it kindly, because I do not want to make myself completely unpopular) is something I have never liked. Never, and that includes the times when there was no Steemit Team and no curating staff.

Edit
Sorry, I have to edit this a few minutes later, because I realize the subject moves me. It makes me sad. Or angry. I don't quite know myself. So:
Honestly, I believe that the Steemit Team around @steemcurator01 also sees the handling of these problems (be it AI, be it plagiarism) the same way I have presented it. Because they need employees who are able to think for themselves, not hangers-on who always say "yes and amen" (just so as not to put their own advantages at risk...).

For me it is not really a question of "blind obedience" :)
I love the transparency of the rules of the game.
If a rule of the game exists, it must be followed by all players. It's like a standard.
But we all know from life that there are double standards, "sacred cows" that cannot be touched.
If a rule is impossible to apply, then it must either be corrected or eliminated altogether.

Again, if I rely on my intuition and assume that a text is AI-generated... what next? It is my assumption and now I have to confirm it. The question is... with what? And we return again to the verification services... which we don't trust...

I am in favor of eliminating this rule on the grounds that it cannot be applied correctly. But this is only my private opinion; a decision from those who create the rules of this game is needed. As long as there is no such decision, I have to either play by the rules or stop playing altogether.

"Playing" in a team means to me, in case of doubt (and that's the only thing it can be about when using AI checkers) to exchange with other team members and together make a decision.
Using the technology is okay. But this should only be a clue, not a decision maker.

I agree, the decision is made by a person)


I would like to hope that the Steemit Team thinks the same way, and I would like to hear their opinion to dispel the doubts. It is very good when there is a clear and specific working algorithm, but I understand that it is impossible to foresee every situation. Still, I always want more certainty in my work :)
