From: Saku Laesvuori <saku@laesvuori.fi>
To: Nathan Dehnel <ncdehnel@gmail.com>
Cc: guix-devel@gnu.org
Subject: Re: Binary descriptors for OpenCV
Date: Wed, 2 Aug 2023 07:46:43 +0300
Message-ID: <20230802044643.ibduhkxu3fvpoiok@X-kone>
In-Reply-To: <CAEEhgEvxWVyHp_FngyhXr62K71MpnbZfi4m_Z43ivG3YQgibrA@mail.gmail.com>


> >If you know how to convert the blob to weights in the neural network
> >(something the program has to do to make any use of the blob) and know
> >the error function, you can continue the training with new data.
> 
> Yeah, I get that, but you don't necessarily know what the weights
> mean. Let's charitably assume you know the blob works on image data
> (instead of audio data or whatever). Do you know if it needs to be
> trained on images of a particular size, or color depth, or encoding,
> or color format, etc.? And what about models for more complex data
> than images like genetic data?

You can always check what kind of data the program feeds to the neural
network, as the program is free software. If the data is valid runtime
input, it is also valid training data.
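As a rough illustration (a minimal sketch, not taken from the actual
OpenCV descriptor code; the image size, scale factor and mean values
below are made up), the preprocessing call a program uses already
spells out the input format the network expects:

import cv2
import numpy as np

# Hypothetical stand-in for whatever image the program would normally load.
img = np.zeros((480, 640, 3), dtype=np.uint8)

# The preprocessing visible here -- resize to 224x224, scale by 1/255,
# subtract a per-channel mean, keep BGR channel order -- is the input
# contract the weights expect.  Because this code is free software, the
# same contract tells you what valid training samples look like.
blob = cv2.dnn.blobFromImage(img, scalefactor=1/255.0, size=(224, 224),
                             mean=(104, 117, 123), swapRB=False)
print(blob.shape)  # (1, 3, 224, 224): batch, channels, height, width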

> How do you know you're not going to end up with a network that spews
> out invalid garbage if you re-train it with things that are
> incompatible with the original training dataset? And how do you know
> that, beyond trial and error, unless you have the original dataset?

You can't exactly *know* that any extra training won't break the model,
but the same holds for editing the original training data. It is only
very likely that training with new data improves the model; you can't
know for sure before you try.
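As a sketch of what "continue the training" means (in PyTorch with
made-up data, not the actual setup for these descriptors), all it takes
is the weights, the error function, and new samples in the runtime
input format:

import torch
import torch.nn as nn

# Toy model standing in for the network the blob's weights would be
# mapped onto; in the real case the weights come from the blob.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

criterion = nn.MSELoss()                       # the known error function
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Hypothetical new training samples, in the same format the program
# feeds the network at runtime.
new_data = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(10)]

model.train()
for inputs, targets in new_data:
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()

# Whether the extra training actually helped can only be checked
# empirically, e.g. by comparing loss on held-out data before and after.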

In this specific case we also do have access to the training data. We
just don't want to spend the computing resources on training the model
from scratch.


