From: Laura Lazzati
Subject: Re: GNU Guix Video Documentation
Date: Mon, 29 Oct 2018 09:47:56 -0300
To: Gábor Boskovits
Cc: Guix-devel, Ricardo Wurmus
List-Id: "Development of GNU Guix and the GNU System distribution."

On Mon, Oct 29, 2018 at 5:17 AM Gábor Boskovits wrote:

> Laura Lazzati wrote (Mon, Oct 29, 2018, 0:27):
>
> > Ok, let me see if I am understanding. For the audio, some people will
> > have to say "I speak X language" and narrate that part. By video, do
> > you mean, for the screencasts, changing the people who appear?
>
> This would be optimal, as this way the lips match.
>
> > I thought that maybe only replacing the audio was OK.
>
> In certain cases it is. One option would be to retain the original
> screencast, or we could even decide not to show the person speaking.
> I proposed overlaying the narrator video onto the screen recording to
> keep this flexibility.
>
> > When I did my little research about video concepts, I read that you
> > can mute the original video and add another voice, even if the lips
> > don't match, and even offer a choice of subtitles. For example: have
> > the video with an English speaker and choose not to show subtitles,
> > or add English/Spanish/French/whatever language you like; some are
> > even made with extra comments like [writing on a board] - I don't
> > remember the name of that - or have the video in Spanish and do the
> > same.
>
> This is partly why I proposed to decompose the narration video into
> separate audio and video streams from the start. A lot of containers
> support multiple subtitle tracks and multiple audio tracks, and a
> player can even autoselect among them, for example based on locale.
> This way we could provide an output where the narration video is not
> overlaid, but an unlocalized screen recording is available with all
> the translated audio and subtitles. In my setup, the extra comments
> and clarifying information are carried by the screen-recording
> subtitles, as these are most likely to relate to what you see on
> screen rather than to what the narrator says, but it is perfectly
> possible to have a third set of subtitles, independent of the
> recordings.
>
> > For the non-screencast videos, I thought translating the slides and
> > CLI commands would be easier. I could do the Spanish translations,
> > at least for the subtitles, but if we can parallelize that, all the
> > better.
>
> I think this one is for a later time, but it would be nice if you
> could help with translations. One great benefit would be if we could
> translate at least a few videos, so that we could test the video
> translation infrastructure and write up a workflow for further
> translators.

Sure! I mentioned the subtitles because of my accent - I speak Spanish,
but the Spanish of Buenos Aires, Argentina. I can write in neutral Latin
American Spanish, but my spoken accent is different from the Latin
American audio in movies, and even more different from the Spanish of
Spain - even from that of other parts of Argentina. But we could give it
a try and make it as neutral as possible :) Or at least have a prototype
for later.

I believe that at least for the subtitles part

> Best regards,
> g_bor
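As a rough illustration of the two outputs discussed above - a single
unlocalized screen recording muxed with per-language narration audio and
subtitles, and a "narrator overlay" variant - the following ffmpeg
commands sketch what such a build step might look like. This is an
untested sketch; all file names, codecs, and output names here are
hypothetical placeholders, not part of any agreed workflow.

```shell
# Mux one unlocalized screen recording with English and Spanish
# narration audio plus matching subtitle tracks into one Matroska
# file.  Language tags let players (e.g. VLC, mpv) autoselect the
# track matching the user's locale.  File names are hypothetical.
ffmpeg -i screencast.mkv \
       -i narration-en.ogg -i narration-es.ogg \
       -i subs-en.srt -i subs-es.srt \
       -map 0:v -map 1:a -map 2:a -map 3:s -map 4:s \
       -c:v copy -c:a copy -c:s srt \
       -metadata:s:a:0 language=eng -metadata:s:a:1 language=spa \
       -metadata:s:s:0 language=eng -metadata:s:s:1 language=spa \
       guix-video-multilang.mkv

# Alternatively, overlay a small narrator video in the bottom-right
# corner of the screen recording (the overlaid variant Gábor
# mentions), keeping the narrator's audio as-is:
ffmpeg -i screencast.mkv -i narrator.mkv \
       -filter_complex "[1:v]scale=320:-1[pip];[0:v][pip]overlay=W-w-10:H-h-10" \
       -c:a copy \
       guix-video-overlay.mkv
```

Keeping the screen recording unlocalized and adding localization only
as extra tracks means one video render can serve every language.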


On Mon= , Oct 29, 2018 at 5:17 AM G=C3=A1bor Boskovits <boskovits@gmail.com> wrote:
Laura Lazzati <laura.lazzati.15@gmail.com> ezt =C3=ADrta (id= =C5=91pont: 2018.
okt. 29., H, 0:27):

> Ok,let me see if I am understanding. For the audio, some people will > have to say "I speak X language" and=C2=A0 narrate that part= , By video you
> mean for screencasted, like changing the people that appear?

This would be optimal, as this way the lips do match.

I thought that maybe only replacing the audio was OK.

In certain cases it is. One option would be to retain the
original screencast, or we could even decide to not show
the person speaking. I proposed to have the narrator
video overlayed onto the screen recording to have
this flexibility

> When I did my little
> research about concepts for videos, I read that you can mute the
> original video, and add another voice, even if the lips don't matc= h,
> and even add the options of choosing subtitles (For example: have the<= br> > video with English speaker and choose not to add the subtitles, or add=
> english/spanish/french/choose whatever language you like, too, even > some of them are made with extra comments like [writing on a board] -<= br> > I don't remember the name of that - or have the video in Spanish a= nd
> do the same.

This is partially the reason that I proposed to decompose the narration
video into a separate audio and video stream from the start. A lot of
containers support multiple subtitle and multiple audio tracks, and it
is even possible for a player to autoselect them, for example based on
locale. This way we could provide an output, where narration video is
not overlayed, but an unlocalized screen recoding is available with
all the translated audio and subtitles. The extra comments and clarificatio= n
information is represented by the screen recording subtitles in my setup, as these are most likely to connect to what you see on screen, rather than<= br> what the narrator says, but it is perfectly possible to have a third set of=
subtitles, independent of the recordings.

> For non-screencasted translating the slides and CLI
> comands I thought it was easier. I could do the spanish translations,<= br> > at least for the subtitles, but if we can parallelize that, the
> better.

I think this one is for a later time, but it would be nice if you could hel= p
with translations. One great benefit would be if we could translate at
least a few videos, that we could test the video translation infrastructure=
and write up a workflow for further translators.
Sure! I = mentioned the subtitles because of my accent - I speak Spanish, but the spa= nish from Buenos Aires, Argentina. I can write in neutral latin american sp= anish, but my spoken accent is different to the latin american audios from = movies, and much more different to spanish from Spain. Even to different pl= aces of Argentina. But we could give it a try and make is as neutral as pos= sible :) Or at least have a prototype for later.=C2=A0
I believe that at least for the subtitles part=C2=A0 <= br>

Best regards,
g_bor
--000000000000a31e9d05795d7cdb--