NSFW AI chatbots

SweatyDevil

Active Member
Jan 8, 2022
518
1,370
How please?
1. Go to openrouter.ai and make an account.
2. After making the account, click on settings, then scroll down to default model and select Deepseek: R1 (Free).
3. Go to keys and create an API key. Be careful to save it, as you can never view it again.
4. Go to Janitor and find proxy-compatible bots.
5. Once in the chat, click on the API settings and select proxy, then select custom.
6. For the model, type in (all in lowercase): deepseek/deepseek-r1:free
7. For the URL, type in exactly this link:
8. Click on the API key field and paste the key you saved earlier.
9. Click save settings, and when a pop-up asks you to reset the temperature back to normal, click yes.
10. Click on generation settings, set the temperature between 0.8 and 0.9, and set the token size to 1000 to avoid cut-off messages.

This should be all you need to make the free version work. I've never used a paid API on OpenRouter, so I don't know how to set that up.
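For anyone who wants to sanity-check their settings outside of Janitor, the steps above map onto a plain OpenRouter chat-completions call. A minimal sketch in Python (the endpoint is OpenRouter's documented chat URL, but verify it against their docs; the key is the one saved in step 3):

```python
import json
import os
import urllib.request

# Assumed OpenRouter chat-completions endpoint; check their docs if it changes.
API_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = os.environ.get("OPENROUTER_API_KEY", "")  # the key you saved in step 3

def build_request(user_message: str) -> urllib.request.Request:
    """Build a request using the same settings as the Janitor proxy setup."""
    payload = {
        "model": "deepseek/deepseek-r1:free",  # all lowercase, as in step 6
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.85,                   # inside the 0.8-0.9 range from step 10
        "max_tokens": 1000,                    # larger limits help avoid cut-off replies
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

# To actually send it (requires a valid key):
# reply = json.loads(urllib.request.urlopen(build_request("Hello!")).read())
# print(reply["choices"][0]["message"]["content"])
```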
 
  • Heart
  • Wow
Reactions: fbass and D0v4hk1n

D0v4hk1n

Member
Oct 4, 2017
486
684
1. Go to openrouter.ai and make an account.
2. After making the account, click on settings, then scroll down to default model and select Deepseek: R1 (Free).
3. Go to keys and create an API key. Be careful to save it, as you can never view it again.
4. Go to Janitor and find proxy-compatible bots.
5. Once in the chat, click on the API settings and select proxy, then select custom.
6. For the model, type in (all in lowercase): deepseek/deepseek-r1:free
7. For the URL, type in exactly this link:
8. Click on the API key field and paste the key you saved earlier.
9. Click save settings, and when a pop-up asks you to reset the temperature back to normal, click yes.
10. Click on generation settings, set the temperature between 0.8 and 0.9, and set the token size to 1000 to avoid cut-off messages.

This should be all you need to make the free version work. I've never used a paid API on OpenRouter, so I don't know how to set that up.
I tried this.

Getting an error message, unfortunately:

A network error occurred, you may be rate limited or having connection issues: Failed to fetch (unk)

I followed each step. It detects the 128k limit but it's not working, unfortunately.
 

SweatyDevil

Active Member
Jan 8, 2022
518
1,370
I tried this.

Getting an error message, unfortunately:

A network error occurred, you may be rate limited or having connection issues: Failed to fetch (unk)

I followed each step. It detects the 128k limit but it's not working, unfortunately.
Not sure what could be causing this. Maybe change the context size? I'm running mine at 16.3k and it works fairly well for me.
 
  • Like
Reactions: D0v4hk1n

chainedpanda

Active Member
Jun 26, 2017
661
1,211
I tried this.

Getting an error message, unfortunately:

A network error occurred, you may be rate limited or having connection issues: Failed to fetch (unk)

I followed each step. It detects the 128k limit but it's not working, unfortunately.
I don't use Janitor, but I did try to confirm it before. I followed a post that seemed identical (possibly copy-pasted) to these instructions.

You may just have had bad luck. Deepseek does have moments of network congestion (the model is too popular, there aren't enough servers). It's much more noticeable when using R1 (I assume due to the reasoning process) and when using the main site. I've gone through a few bad attempts myself, and when streaming on SillyTavern, sometimes messages just stop completely.

You'll also have to be careful about refreshing the message. There's a limit on the number of requests you can send in a given time frame (something like 1 every 15 seconds). That could have caught you somehow?
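If the limit really is on the order of one request every 15 seconds (that figure is a guess from this thread, not something OpenRouter documents here), a simple client-side backoff keeps refreshes from tripping it. A sketch in Python:

```python
import time

def with_backoff(send, retries=4, base_delay=15.0):
    """Call send() and retry with growing delays on failure.

    base_delay=15s matches the guessed one-request-per-15-seconds limit;
    adjust it if OpenRouter documents a different rate.
    """
    for attempt in range(retries):
        try:
            return send()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries, surface the error
            time.sleep(base_delay * (attempt + 1))  # wait 15s, 30s, 45s, ...
```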

I've also seen someone claim that Deepseek lumps ERP users together into a lower tier that basically gives us lower priority, but I've seen no evidence of that.
 

D0v4hk1n

Member
Oct 4, 2017
486
684
Not sure what could be causing this. Maybe change the context size? I'm running mine at 16.3k and it works fairly well for me.
Update

I changed the context size to 16k per your advice.

I also deleted the custom prompt that was set there. Happy to report it now works!!
 
  • Yay, update!
Reactions: SweatyDevil

fbass

Active Member
May 18, 2017
522
774
1. Go to openrouter.ai and make an account.
2. After making the account, click on settings, then scroll down to default model and select Deepseek: R1 (Free).
3. Go to keys and create an API key. Be careful to save it, as you can never view it again.
4. Go to Janitor and find proxy-compatible bots.
5. Once in the chat, click on the API settings and select proxy, then select custom.
6. For the model, type in (all in lowercase): deepseek/deepseek-r1:free
7. For the URL, type in exactly this link:
8. Click on the API key field and paste the key you saved earlier.
9. Click save settings, and when a pop-up asks you to reset the temperature back to normal, click yes.
10. Click on generation settings, set the temperature between 0.8 and 0.9, and set the token size to 1000 to avoid cut-off messages.

This should be all you need to make the free version work. I've never used a paid API on OpenRouter, so I don't know how to set that up.
I couldn't get this to work at all. I finally figured out that you need to refresh the page after you set everything up.
 

D0v4hk1n

Member
Oct 4, 2017
486
684
I couldn't get this to work at all. I finally figured out that you need to refresh the page after you set everything up.
Glad you got it to work.

Unfortunately, I am getting better results with JAI LLM tbh

I stopped using the Proxy.
 

fbass

Active Member
May 18, 2017
522
774
Glad you got it to work.

Unfortunately, I am getting better results with JAI LLM tbh

I stopped using the Proxy.
Yeah, my results were horrible, I had to stop using it. But I did find Deepseek 3 (free) and the difference is night and day.
 

desmosome

Conversation Conqueror
Sep 5, 2018
6,501
14,841
Glad you got it to work.

Unfortunately, I am getting better results with JAI LLM tbh

I stopped using the Proxy.
Among all the LLM models I've tried, JAI LLM is one of the worst. It's decent when you're new to it, but it gets extremely repetitive. "Shiver down spines," "ruin for anyone else," "nails on the back." Zzzzz. Every chat is the same.

To be fair, other models also exhibit their own quirks and get stale if you use them too much. So I think it really is necessary to switch LLMs once in a while to keep things fresh.
 
  • Like
  • Heart
Reactions: D0v4hk1n and fbass

fbass

Active Member
May 18, 2017
522
774
Among all the LLM models I've tried, JAI LLM is one of the worst. It's decent when you're new to it, but it gets extremely repetitive. "Shiver down spines," "ruin for anyone else," "nails on the back." Zzzzz. Every chat is the same.

To be fair, other models also exhibit their own quirks and get stale if you use them too much. So I think it really is necessary to switch LLMs once in a while to keep things fresh.
It's not perfect, but if you put this in your custom prompt it gets rid of most of that: "[{{char}} will use informal, casual, conversational language. {{char}} will not use overly flowery, formal, or Shakespearean language when speaking or describing actions. All dialogue should be written using common, easily understood language typical of normal, informal conversation. {{char}} will use a conversational style that fits their scripted personality, never straying from it regardless of what happens during the roleplay.]"
 
  • Hey there
Reactions: D0v4hk1n

D0v4hk1n

Member
Oct 4, 2017
486
684
Among all the LLM models I've tried, JAI LLM is one of the worst. It's decent when you're new to it, but it gets extremely repetitive. "Shiver down spines," "ruin for anyone else," "nails on the back." Zzzzz. Every chat is the same.

To be fair, other models also exhibit their own quirks and get stale if you use them too much. So I think it really is necessary to switch LLMs once in a while to keep things fresh.
True. I mainly switch between JAI and Spicychat myself.

As for Deepseek, should I add the jailbreak prompt to the special prompt section of the PROXY settings in JAI to make it work better?

Also, do you know why it cuts off when it has such a large context? Pretty weird tbh.
 

fbass

Active Member
May 18, 2017
522
774
True. I mainly switch between JAI and Spicychat myself.

As for Deepseek, should I add the jailbreak prompt to the special prompt section of the PROXY settings in JAI to make it work better?

Also, do you know why it cuts off when it has such a large context? Pretty weird tbh.
I put mine in the proxy setting. Use deepseek3, it's actually amazing.
 
  • Like
Reactions: D0v4hk1n

desmosome

Conversation Conqueror
Sep 5, 2018
6,501
14,841
I've been using deepseek R1 (free) on openrouter, and I take back what I said. This model is probably the best one I've used so far. R1 does something rather unique among the models I tried. When you write a prompt, it goes through a step where it thinks about your prompt and what it should be doing first.

Example:
Okay, so the user wants Jennifer to contemplate her life choices and how she ended up here.
The situation:
-Jennifer is locked in someone's basement.
-Jennifer only remembers leaving the party.
The contemplation:
-Jennifer should be regretting her decision to sneak out and go to the party.
-bla bla bla
Plot developments:
-bla bla bla
-bla bla bla

After this thinking step, it goes into the roleplay. The messages are divided into two parts: the thinking, and the response that carries out the instructions.

This thinking step is incompatible with many people's idea of a roleplay chat, but when you actually read what the bot is thinking before it generates the roleplay, it's SUPER FUCKING ACCURATE. It usually knows exactly what you're asking and what it should do to make the roleplay interesting and in line with the prompt. So the response generated from those guidelines it thought up is generally better than the output of other models, which presumably go straight into generating words. If you're using openrouter, you can go to settings and block the Targon provider. Targon is the one that includes this thinking process in the output message. The other provider that serves Deepseek apparently doesn't include it, and that worked for me. The model still thinks behind the scenes when generating, so it cooks up some nice shit.

Edit: Well, I guess thinking models aren't some new thing, though. OpenAI's o1 models are thinking models too.
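For anyone whose provider does include the reasoning in the output, R1's thinking typically arrives wrapped in <think>...</think> tags, so it can also be stripped client-side. A small sketch (the tag name is the DeepSeek convention; other providers may format it differently):

```python
import re

def strip_thinking(text: str) -> str:
    """Remove <think>...</think> reasoning blocks from a model reply."""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL).strip()
```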
 
Last edited:
  • Like
Reactions: D0v4hk1n

D0v4hk1n

Member
Oct 4, 2017
486
684
I take back everything I said about Deepseek (using R1 free from openrouter) on JanitorAI.

Now that I've played with it a bit and added the jailbreak prompt in the proxy custom prompt, I'm impressed with how vastly superior it can be. It does, however, make JAI go a bit crazy, as it constantly asks you to reload and check that you're human lol

Another disadvantage is that it sometimes hallucinates, a LOT.

Other than that, it's really good. Like a night and day difference.

It used to show me <think>, but now it no longer does. Is that normal?
 

Daba

Member
Jan 22, 2018
285
241
It will probably cheer someone up: Kindroid now has unlimited messages for free users.
I'm not sure how much worse the free LLM version is than the paid one, but of all the ones I've tried, Kindroid has the best AI companion and ERPG.
 
  • Like
  • Heart
Reactions: fbass and Geigi

Geigi

Well-Known Member
Jul 7, 2017
1,596
3,282
It will probably cheer someone up: Kindroid now has unlimited messages for free users.
I'm not sure how much worse the free LLM version is than the paid one, but of all the ones I've tried, Kindroid has the best AI companion and ERPG.
Good news. I haven't been on Kindroid because JanitorAI is eating my time.
 

D0v4hk1n

Member
Oct 4, 2017
486
684
Guys, any reason why JAI x Deepseek R1 keeps making JAI refresh and recheck that I'm human, and sometimes fails to generate a response?

Also, another update on my experience with R1.

I tried a Game of Thrones RPG and I am so impressed by the responses. I'm legit hooked! Like holy shit, the responses are book-accurate! R1 is smart.