SillyTavern memory (Reddit). A place to discuss the SillyTavern fork of TavernAI.
**So What is SillyTavern?** Tavern is a user interface you can install on your computer (and Android phones) that allows you to interact with text generation AIs and chat/roleplay with characters you or the community create. SillyTavern is a fork of TavernAI 1.8, which is under more active development and has added many major features. At this point they can be thought of as completely independent programs.

Jan 10, 2025 · SillyTavern (or ST for short) is a locally installed user interface that allows you to interact with text generation LLMs, image generation…

Dec 19, 2024 · I've always been very interested in long-term memory using SillyTavern as an AI chat companion, and anything that can improve the long-term memory…

The best place for fast help with SillyTavern issues is joining the Discord! We have lots of moderators and community members active in the help sections. Once you join there is a short lobby puzzle to verify you have read the rules: https://discord.gg/sillytavern

I'm assuming a context length is some sort of memory related to the chat. I'm looking for something that enhances it as an AI companion for long-term chats over a period of months rather than just for a roleplay. Right now I use mainly GPT-3.5 and I easily get to 1000+ on a single chat pretty easily. Tried RP with GPT and it was almost too good, freaked me out haha.

SillyTavern has a JB (jailbreak) already activated by default. The default message is written for GPT and doesn't work with Claude; if you just want normal RP, you can modify the prompt.

Example Dialogue will be pushed out of memory once your chat starts maxing out the AI's memory, but ST always keeps the Description in the AI's memory. I understand the character description helps with memory, since it helps the model recognise some facts. For complex information there is also the World Information (aka Lorebooks).

If you're on Windows, I'd try this: right-click the taskbar and open Task Manager, click the Performance tab, and select GPU on the left (scroll down, it might be hidden at the bottom). With the model loaded and at 4k context, look at how much Dedicated GPU memory and Shared GPU memory is used, and make a note of what your shared memory is at.

OK, I'm going to assume you all just installed SillyTavern, only know how to start chatting, and have no idea what is going on. First of all, let's say you loaded a model that has 8k context (how much memory the AI can remember). The first thing you have to do is go to the settings (the three lines to the far left). Remember when I said context is the AI's memory? Let's assume you have exactly 8000 context tokens. Permanent tokens are tokens that will always be present in the AI's memory, so if the card is using 1000 permanent tokens, you only actually have 7000 context tokens to work with when chatting.
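To make that permanent-token arithmetic concrete, here is a tiny sketch in plain Python using the example numbers from the post. The function and variable names are just for illustration and have nothing to do with SillyTavern's internals.

```python
# Illustrative sketch of the context budget described above (not SillyTavern code).
# Numbers are the example values from the post.

def available_chat_tokens(context_size: int, permanent_tokens: int) -> int:
    """Tokens left for chat history after the card's permanent tokens."""
    return max(context_size - permanent_tokens, 0)

context_size = 8000      # the model's context window ("AI memory")
permanent_tokens = 1000  # card tokens that are always kept in the prompt

print(available_chat_tokens(context_size, permanent_tokens))  # -> 7000
# Anything older than what fits in those 7000 tokens silently falls out of
# the prompt, which is why the AI "forgets" early messages.
```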
Anyway, I do understand the fact that the model has no memory and is not able to remember anything on its own. I've been trying for a whole week, reading up and looking for a way to get long-term memory with my new install of SillyTavern. I'm interested in this because it solves a HUGE problem in current-generation LLMs: losing the state of a conversation.

As for context, it's just that: context size is basically how much your bot samples from its character data and chat history to generate its response, and the more it samples, the more memory it needs to do so. My context size is set to around 1400 and I've never had a memory issue, even with heavy use.

Don't put or leave "extra description" in the Example Dialogue field. There is no END_OF_DIALOG tag in ST Example Dialogue; if you see it in an imported character, delete it or it might confuse the AI. To add things permanently, look at the Author's Note or just write it directly into the advanced formatting textbox.

Apr 21, 2023 · In general I'm curious about the current state of the memory extension and if you have any plans or wishes for it. And it's also going to get a new update which adds even more cool features.

From all the ways I've read so far, thanks to FieldProgrammable, the following seem to be the different approaches possible so far (extracted from one of his posts): Lorebooks, … I haven't used them extensively yet, but it could be that the "memory" summariser plugin would be redundant with them.

To improve the model's "memory" in very long roleplays, look at the Summarize or Vector Database extensions. SillyTavern has optional modules, such as memory summarisation and character reactions if you set them up, and it auto-connects if you hook it up with OpenAI or a local oobabooga backend. I highly recommend using the SillyTavern Summarize engine; you can set it to automatically write a summary every 10-ish messages, which it then keeps in context. It helps to maintain continuity without having to remember *every* token from the beginning of the chat.

When a match is found from our prompt, it only extracts the summarized memory when needed, which should have good context for that memory, and that makes it a great long-term memory system. It's not without pain, though: the summary must be created and entered into the database manually, since this is not a supported feature of SillyTavern.
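For anyone wondering what that keyword-matched, manually maintained memory database could look like in practice, here is a minimal sketch. It assumes hand-written summary entries with trigger keys, loosely in the spirit of Lorebook/World Info entries; the entry format, names, and matching logic are illustrative assumptions, not SillyTavern's actual implementation.

```python
# Illustrative sketch of keyword-triggered memory retrieval (not SillyTavern code).
# Each memory is a hand-written summary with trigger keywords.

memories = [
    {"keys": ["bakery", "croissant"],
     "text": "User and Ann first met at the small bakery on Elm Street."},
    {"keys": ["sister", "mia"],
     "text": "Ann's sister Mia lives abroad and calls every Sunday."},
]

def retrieve_memories(recent_chat: str) -> list[str]:
    """Return only the summaries whose trigger keywords appear in the recent chat."""
    recent = recent_chat.lower()
    return [m["text"] for m in memories
            if any(key in recent for key in m["keys"])]

# Only the matched summaries get injected into the prompt, so old facts come
# back when relevant without keeping the whole chat history in context.
print(retrieve_memories("Do you remember that croissant place?"))
```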
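The Summarize extension recommended above works on a different principle: every handful of messages the model is asked to condense the recent chat, and that condensed summary stays pinned in context. A rough sketch of that loop is below, with a stub in place of the real LLM call; the interval and function names are assumptions for illustration, not the extension's API.

```python
# Rough sketch of a rolling-summary memory loop (illustrative only; the real
# Summarize extension asks your connected LLM backend instead of this stub).

SUMMARY_INTERVAL = 10  # refresh the summary every ~10 messages, like the setting above

def summarize(previous_summary: str, recent_messages: list[str]) -> str:
    """Placeholder: in practice this would prompt the LLM to merge the
    previous summary with the newest messages."""
    return (previous_summary + " " + " | ".join(recent_messages)).strip()[:500]

summary = ""
chat_history: list[str] = []

def on_new_message(message: str) -> str:
    """Record the message and refresh the summary on a fixed interval.
    The returned summary is what stays pinned in the prompt."""
    global summary
    chat_history.append(message)
    if len(chat_history) % SUMMARY_INTERVAL == 0:
        summary = summarize(summary, chat_history[-SUMMARY_INTERVAL:])
    return summary
```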