On Sat, Mar 18, 2023 at 01:03:33PM +0100, Mikael Djurfeldt wrote:
> On Sat, Mar 18, 2023 at 1:00 PM Mikael Djurfeldt wrote:
>
> > On Sat, Mar 18, 2023 at 11:53 AM James Crake-Merani wrote:
> >
> >> On 18/03/2023 08:29, tomas@tuxteam.de wrote:
> >>
> >> > My take is that such potentially society-shattering technologies
> >> > don't belong in the hands of corporations which have no choice
> >> > but to maximize their return on investment. But perhaps that's me.
> >>
> >> Yes, I find it concerning that this technology is owned by OpenAI which
> >> is a for profit company. Because, as you say, for profit companies are
> >> not necessarily interested in the societal benefits that new technology
> >> brings along but rather the profits that can be made off this technology.

I think it's even more evil: the imperative to return on investment
leads them to do damage to society whenever there's money in it.
Think of social networks spreading hate speech and harmful
behaviours, or of spying and social-influencing companies like
Cambridge Analytica (now Emerdata), Palantir, or the NSO Group (of
Pegasus fame; far from the only one in that game, but the one that
made it into the media). This company [1] sells bot profiles which
pose as humans on "social" networks for influence campaigns. Current
"AI" is more than good enough to do that.

> > One problem with trying to preserve freedom wrt AI is that training on
> > large data requires a lot of computational resources. It would be nice if
> > there were some kind of non-profit organization ("FreeAI"?) where people
> > who want free AI pool their resources. I guess in this case it can't be
> > financed the same way as Wikipedia but there would have to be a
> > subscription fee.

This is a fair point, yet there are some projects "out there" trying
to "prove" the contrary. ConceptNet [2] is (basically) a language
model which has tried to be free (in both senses). For images, there
is Stable Diffusion (disclaimer: I don't know much about them). For
raw data, there's Common Crawl [3], which is Stable Diffusion's data
source. The Stable Diffusion article on Wikipedia has some details on
the resources that go into such training.

Personally, I'd like to see ConceptNet under the umbrella of some
foundation (e.g. Wikimedia).

So not all is lost.

Cheers

[1] https://www.theguardian.com/world/2023/feb/15/disinformation-hacking-operative-team-jorge-tal-hanan
[2] https://conceptnet.io/
[3] https://en.wikipedia.org/wiki/Common_Crawl

--
tomás