
RE: [BeyondBitcoin RSVP] Whaletank Hangout #228 | 2017-10-14 | Sat @230PM UTC | Over 650 $BD in Awards!

in #eos-project · 7 years ago

srt2vtt can now normalize captions via machine learning, and it can also cut and/or splice video, audio, and captions with the help of the excellent Subtitle Edit by adding actionable cues to the timeline.
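For anyone following along, here is a minimal sketch in Python of the per-cue normalization idea discussed in this thread. It is not srt2vtt's actual code (that tool runs on Mono); the `restore_punctuation` function is a hypothetical stand-in for the machine-learning punctuator, and the point is simply that each cue's text gets cleaned up while its timing line is left untouched.

```python
# Illustrative sketch only -- not srt2vtt's real implementation.
# Each SRT cue's text is sent through a punctuation/casing step; timings stay as-is.

import re

def restore_punctuation(text: str) -> str:
    """Placeholder for an ML punctuation/casing model (hypothetical)."""
    text = text.strip()
    if text and not text.endswith(('.', '?', '!')):
        text += '.'
    return text[:1].upper() + text[1:] if text else text

def normalize_srt(src_path: str, dst_path: str) -> None:
    """Read an .srt file, punctuate each cue's text, keep index and timing lines."""
    with open(src_path, encoding='utf-8-sig') as f:
        blocks = re.split(r'\n\s*\n', f.read().strip())

    out = []
    for block in blocks:
        lines = block.splitlines()
        if len(lines) >= 3:                       # index line, timing line, text...
            head, text = lines[:2], ' '.join(lines[2:])
            out.append('\n'.join(head + [restore_punctuation(text)]))
        else:
            out.append(block)                     # pass malformed blocks through unchanged
    with open(dst_path, 'w', encoding='utf-8') as f:
        f.write('\n\n'.join(out) + '\n')

normalize_srt('captions.srt', 'captions.normalized.srt')
```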


So you send each cue one by one to the punctuator?

Sorry, I hadn't seen this.

That's exactly what we talked about; that's the version I had sent you. I thought you had even tried it already!

Alex,

  • When did you build it in?
  • Did Chucky try it?
  • How does it work? Do you no longer have to edit and make sentences on YouTube?
  • So where does it fit in the workflow? (YT auto-captioning, then srt2vtt per subtitle track? So before mixing? Remember we use separate tracks for each speaker. Also, if you insert the speaker name, will it mess up the aligner if we want to cut out the audio later too?)

Let's have a meeting and a constructive group chat or talk about this, say on Discord or Mumble.

This is very cool, Alex. If you could make a video showing how it works in action, that would be even better. I tried running it, but Mono crashed.
