On Sat, Mar 18, 2023 at 09:41:37AM +0100, Mikael Djurfeldt wrote:
> On Sat, Mar 18, 2023 at 9:36 AM wrote:

[...]

> > Perhaps you didn't know, but you are training the model :-)
> >
>
> Unfortunately not. I'm prompting it within its 32000 token (GPT-4)
> attention span. Next conversation the model is back to exactly the same
> state again. Then, of course, it is possible that OpenAI chooses to filter
> out something from the dialogs it has had.

You don't think that those conversations end up as raw data for the next
model? I'd be surprised, but you definitely know more than me.

> So, a trick you can do is to start out every session with a standard set of
> prompts (like "keep it short" or whatever) which will then act as a kind of
> configuration.

Oh, that's interesting! Thanks

-- 
t
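
P.S. For anyone scripting this, here is a minimal sketch of that
"standard prompts as configuration" trick, assuming the openai Python
package (the pre-1.0 ChatCompletion interface current at the time of
writing); CONFIG_PROMPTS and ask() are made-up names, and the actual
prompt contents are just placeholders:

    import openai  # reads OPENAI_API_KEY from the environment

    # Standard "configuration" prompts, replayed at the start of every
    # session. The model keeps no memory between conversations, so this
    # prefix is the only way to carry settings over.
    CONFIG_PROMPTS = [
        {"role": "system", "content": "Keep answers short."},
        {"role": "system", "content": "Answer in plain text."},
    ]

    def ask(question, history=None):
        """Send a question, always prefixed with the standard config."""
        history = history or []
        messages = CONFIG_PROMPTS + history + [
            {"role": "user", "content": question},
        ]
        resp = openai.ChatCompletion.create(
            model="gpt-4", messages=messages,
        )
        return resp["choices"][0]["message"]["content"]

    print(ask("What is a monad?"))

Note the configuration still counts against the 32000-token window, so
it is worth keeping the prefix short.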