---
base_model:
- SicariusSicariiStuff/Assistant_Pepe_8B
base_model_relation: quantized
pipeline_tag: text-generation
library_name: safetensors
tags:
- exl3
- 4-bit
- 6-bit
- 8-bit
---
# Source model
[Assistant_Pepe_8B](https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B) by SicariusSicariiStuff
# Provided quantized models
ExLlamaV3: release v0.0.20
| Type | Size | CLI |
|------|------|---------|
| H8-4.0BPW | 5.10 GB | Copy-paste the line / Download the batch file |
| H8-6.0BPW | 6.84 GB | Copy-paste the line / Download the batch file |
| H8-8.0BPW | 8.59 GB | Copy-paste the line / Download the batch file |
Requirements: a Python installation with the `huggingface_hub` package to use the CLI.
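As a minimal sketch, the files for one quant can also be fetched directly from Python with `huggingface_hub` instead of the CLI. The repository id, branch name, and local directory below are placeholders (assumptions), not the actual values; substitute the repo and revision that correspond to the quant you picked from the table above:

```python
# Minimal sketch: download one exl3 quant with huggingface_hub.
# NOTE: repo_id and revision are placeholders, not the actual
# repository/branch names; replace them with the real values.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="your-namespace/Assistant_Pepe_8B-exl3",   # hypothetical quant repo
    revision="H8-6.0BPW",                              # hypothetical branch per quant type
    local_dir="Assistant_Pepe_8B-exl3-6.0bpw",
)
print(f"Model files downloaded to: {local_path}")
```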
# Licensing
License detected: llama3.1
The license for the provided quantized models is inherited from the source model (which in turn incorporates the license of its original base model). For definitive licensing information, please refer first to the pages of the source and base models. File and page backups of the source model are provided below.
# Backups
Date: 01.02.2026
## Source files
<details>
<summary>Source page (click to expand)</summary>
<style>
.impish-title{
font-family: system-ui, -apple-system, BlinkMacSystemFont, sans-serif;
font-weight: 800;
font-size: clamp(32px, 5vw, 48px);
letter-spacing: 0.04em;
text-align: center;
margin: 32px 0;
color: #eaeaea;
position: relative;
}
.impish-title::after{
content: attr(data-text);
position: absolute;
inset: 0;
color: #E31515;
filter: blur(8px);
opacity: 0.6;
z-index: -1;
animation: pulse 3s ease-in-out infinite;
}
@keyframes pulse{
0%,100%{opacity:0.35;}
50%{opacity:0.75;}
}
</style>
<div class="impish-title" data-text="Assistant_Pepe_8B">
Assistant_Pepe_8B
</div>
<img src="https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B/resolve/main/Images/Assistant_Pepe_8B.png" alt="Assistant_Pepe_8B" style="width: 50%; min-width: 500px; display: block; margin: auto;">
<style>
.hf-links, .hf-tldr, .hf-cards{
display:flex;justify-content:center;align-items:center;flex-wrap:wrap;
gap:14px;margin:16px 0;
}
.hf-links a, .hf-tldr a{
display:flex;flex-direction:column;align-items:center;justify-content:center;
text-align:center;text-decoration:none;font-weight:700;line-height:1.15;
padding:10px 16px;border-radius:14px;border:2px solid currentColor;
transition:transform .15s ease,box-shadow .15s ease,background-color .15s ease,color .15s ease;
}
.hf-tldr a{
font-size:48px;color:purple;min-width:100%;
}
.hf-tldr a:hover{
transform:translateY(-2px);
background:rgba(128,0,128,.1);
box-shadow:0 8px 22px rgba(128,0,128,.45);
color:#fff;
}
.hf-cards{
gap:14px;margin:16px 0;
}
.hf-cards a{
display:block;
text-align:center;text-decoration:none;font-weight:700;
padding:0;border-radius:14px;
transition:transform .15s ease,box-shadow .15s ease;
flex:1;min-width:0;
position:relative;
overflow:hidden;
border:3px solid #7a7a7a;
height:90px;
}
.hf-cards a video{
position:absolute;
top:50%;left:50%;
transform:translate(-50%, -50%);
min-width:100%;
min-height:100%;
width:auto;
height:auto;
object-fit:cover;
display:block;
filter:brightness(0.7);
transition:filter .15s ease;
}
.hf-cards a .card-text{
position:absolute;
top:50%;left:50%;
transform:translate(-50%, -50%);
font-size:20px;
color:#fff;
text-shadow:2px 2px 8px rgba(0,0,0,0.8), 0 0 20px rgba(0,0,0,0.6);
z-index:2;
white-space:nowrap;
pointer-events:none;
}
.hf-cards a:hover{
transform:translateY(-2px);
box-shadow:0 8px 22px rgba(120,120,120,.55);
border-color:#9a9a9a;
}
.hf-cards a:hover video{
filter:brightness(0.9);
}
.hf-links a{
font-size:20px;min-width:240px;max-width:280px;
}
.hf-links a .top{font-size:16px;opacity:.9;}
.hf-links a .bottom{font-size:20px;}
.hf-links a.red{color:#E31515;}
.hf-links a.yellow{color:#FFC800;}
.hf-links a.green{color:#64FF00;}
.hf-links a:hover{
transform:translateY(-1px);
background:rgba(255,255,255,0.04);
box-shadow:0 6px 18px rgba(0,0,0,.15), inset 0 0 0 9999px rgba(255,255,255,.02);
}
.hf-links a.red:hover{
background:rgba(227,21,21,.12);
box-shadow:0 8px 20px rgba(227,21,21,.35);
color:#fff;
}
.hf-links a.yellow:hover{
background:rgba(255,200,0,.15);
box-shadow:0 8px 20px rgba(255,200,0,.35);
color:#111;
}
.hf-links a.green:hover{
background:rgba(100,255,0,.14);
box-shadow:0 8px 20px rgba(100,255,0,.35);
color:#093;
}
/* mobile stacking */
@media (max-width:520px){
.hf-links a{min-width:100%;max-width:100%;}
.hf-tldr a{font-size:36px;}
.hf-cards{flex-direction:column;}
.hf-cards a .card-text{font-size:18px;}
}
</style>
<div class="hf-tldr">
<a href="https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B#tldr">
Click here for TL;DR
</a>
</div>
<div class="hf-cards">
<a href="https://huggingface.co/SicariusSicariiStuff/Roleplay_Cards">
<video autoplay loop muted playsinline>
<source src="https://huggingface.co/SicariusSicariiStuff/Roleplay_Cards/resolve/main/Resources/Roleplay.mp4" type="video/mp4">
</video>
<span class="card-text">Go here for Roleplay cards</span>
</a>
<a href="https://huggingface.co/SicariusSicariiStuff/Adventure_Cards">
<video autoplay loop muted playsinline>
<source src="https://huggingface.co/SicariusSicariiStuff/Adventure_Cards/resolve/main/Resources/Adventure.mp4" type="video/mp4">
</video>
<span class="card-text">Go here for Adventure cards</span>
</a>
</div>
<div class="hf-links">
<a class="red" href="https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B#available-quantizations">
<span class="top">Click here</span>
<span class="bottom">for quantizations</span>
</a>
<a class="yellow" href="https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B#generation-settings">
<span class="top">Click here</span>
<span class="bottom">for recommended settings</span>
</a>
<a class="green" href="https://ko-fi.com/sicarius">
<span class="top">Click here</span>
<span class="bottom">to buy me a coffee</span>
</a>
</div>
What happens if we maximize helpfulness + shitposting, while reducing positivity?
This is a project that was a long time in the making because I wanted to get it right. I'm still not fully satisfied, as there are some rough corners to sand, but for now, this would do.
The goal was to maximize shitpostness along with helpfulness, without glazing the user for every retarded idea. Not an easy needle to thread.
This amphibious AI has learned the ways of /g/, and speaks fluent brainrot, but will also help you out with just about anything you'll need, and won't be ashamed to roast you while at it.
For those who remember Oni_Mitsubishi_12B - it was so overtly toxic that it made me worry at first (only to quickly be verified as not even that uncensored). I could do better. So now I did.
This model is a significant refinement of the idea, with a cleaned dataset, better curation, and much more intelligence (also one million tokens of context, theoretically). It is much less (overtly) toxic and much smarter, while also being very helpful (and imo much funnier too, because the skies are blue due to the chemtrails and the Neuralink that feeds this simulation).
But why?
It's now late January 2026, and open source is crushing the closed frontier (Kimi K2.5 was recently released: 1T params that beat frontier models), but has anyone released a helpful shitposting AI yet?
Yeah, didn't think so.
If it shitposts too hard, it is often not that helpful; if it's helpful enough, the shitposting ability is often lacking. You just couldn't win. Until now.
Oh, and no system prompt is needed. Just don't let it get stuck in a greentext loop; I might have overcooked the frog a tad too fast in the pot for this one.
P.S. It writes HILARIOUS STORIES, nothing like a typical AI assistant; see the examples below for details.
TL;DR
- Top tier shitposting: absolutely unhinged, funny, and witty. Sometimes cringe too; nothing is perfect.
- Helpful! Will actually get shit done.
- Will 100% roast you for being dumb, thanks to a subtle negativity bias infusion. Very refreshing! 🤌
- Deep insights (when it doesn't delve into absolutely unhinged conspiracy theories about how the water makes the frogs gay).
- Built on my UltraLong-1M-Instruct_Abliterated model, fulfill your dream of a million-token-long shitpost.
- Say goodbye to GPT-isms and say hello to truly creative stories!
- Ships code.
- Inclusive toward amphibians.
Model Details
- Intended use: Shitposting, General Tasks.
- Censorship level: <b>Low - Medium</b>
- X / 10 (10 completely uncensored)

UGI score: awaiting evals

Available quantizations:
Generation settings
Recommended settings for assistant mode:
<details>
<summary>Full generation settings: <b>Debug Deterministic</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/Debug-deterministic.png" alt="Debug Deterministic_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary>Full generation settings: <b>min_p</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/min_p.png" alt="min_P_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<h2 style="color: lime; font-weight: bold; font-size: 65px; text-align: center;">Chat Examples:</h2>
Chat Examples (click below to expand)
NOTE: All examples were made with the default min_p preset and no system prompt of any kind.
Example code (of the snake game) is available here
<details>
<summary> Zero-shot a <b>snake</b> game in Python, then improve it and add insightful code comments (the resulting code is included in the repo; it runs perfectly):</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B/resolve/main/Images/Examples/code.png" alt="Code_Chat_Example" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary>Writing a short story about a <b>cat that barks</b>:</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B/resolve/main/Images/Examples/log0.png" alt="Story_Chat_Example" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary>Asking if he's <b>Elon Musk</b>:</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B/resolve/main/Images/Examples/log1.png" alt="Story_Chat_Example" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary>Is it true that <b>aliens exist</b>?</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B/resolve/main/Images/Examples/log2.png" alt="Story_Chat_Example" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary>The year is <b>21337</b>:</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B/resolve/main/Images/Examples/log3.png" alt="Story_Chat_Example" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary>Is <b>drinking water</b> based?</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B/resolve/main/Images/Examples/log4.png" alt="Story_Chat_Example" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
Model instruction template: Llama-3-Instruct
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
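For illustration, here is a minimal Python sketch that assembles a single-turn prompt from the template above with plain string formatting; the function name and example message are ours, not part of the original card:

```python
# Minimal sketch: build a Llama-3-Instruct style prompt string from the
# template above. Function name and example strings are illustrative only.
def build_prompt(user_input: str, system_prompt: str = "") -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# The card notes no system prompt is needed, so it is left empty here.
print(build_prompt("Is drinking water based?"))
```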
<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2>
<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a>
Citation Information
```
@llm{Assistant_Pepe_8B,
  author = {SicariusSicariiStuff},
  title = {Assistant_Pepe_8B},
  year = {2026},
  publisher = {Hugging Face},
  url = {https://huggingface.co/SicariusSicariiStuff/Assistant_Pepe_8B}
}
```
Other stuff
</details>