From: Gábor Boskovits
Subject: Re: GNU Guix Video Documentation
Date: Mon, 29 Oct 2018 09:17:06 +0100
To: Laura Lazzati
Cc: Guix-devel <guix-devel@gnu.org>, Ricardo Wurmus

Laura Lazzati wrote (on Mon, 29 Oct 2018 at 0:27):

> Ok, let me see if I am understanding. For the audio, some people will
> have to say "I speak X language" and narrate that part. By video you
> mean for screencasted, like changing the people that appear?

That would be optimal, since that way the lips would match. I thought
that maybe only replacing the audio was OK, and in certain cases it is.
One option would be to retain the original screencast, or we could even
decide not to show the person speaking. I proposed to have the narrator
video overlaid onto the screen recording to keep this flexibility.

> When I did my little research about concepts for videos, I read that
> you can mute the original video and add another voice, even if the
> lips don't match, and even add the option of choosing subtitles (for
> example: have the video with an English speaker and choose not to add
> subtitles, or add English/Spanish/French/whatever language you like;
> some of them are even made with extra comments like [writing on a
> board] - I don't remember the name of that - or have the video in
> Spanish and do the same).

This is partly why I proposed to decompose the narration video into
separate audio and video streams from the start. Many containers
support multiple subtitle and multiple audio tracks, and it is even
possible for a player to select them automatically, for example based
on the locale. This way we could provide an output where the narration
video is not overlaid, but an unlocalized screen recording is available
with all the translated audio tracks and subtitles. In my setup the
extra comments and clarifying information are carried by the screen
recording subtitles, as these most likely relate to what you see on
screen rather than to what the narrator says, but it is perfectly
possible to have a third set of subtitles, independent of the
recordings.

> For non-screencasted, translating the slides and CLI commands I
> thought it was easier. I could do the Spanish translations, at least
> for the subtitles, but if we can parallelize that, the better.

I think this one is for a later time, but it would be nice if you could
help with translations. One great benefit would be to translate at
least a few videos, so that we could test the video translation
infrastructure and write up a workflow for further translators.

Best regards,
g_bor
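
P.S.: To make the multi-track idea a bit more concrete, here is a
rough, untested ffmpeg sketch of muxing one unlocalized screen
recording together with English and Spanish narration and subtitles
into a single Matroska file (the file names are placeholders, not
anything we actually have yet):

  ffmpeg -i screen.mkv \
         -i narration-en.flac -i narration-es.flac \
         -i subs-en.srt -i subs-es.srt \
         -map 0:v -map 1:a -map 2:a -map 3:s -map 4:s \
         -metadata:s:a:0 language=eng -metadata:s:a:1 language=spa \
         -metadata:s:s:0 language=eng -metadata:s:s:1 language=spa \
         -c copy guix-video.mkv

Players that honor the language tags can then pick the matching audio
and subtitle tracks automatically, for example based on the user's
locale.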