> A newer version of this model is available: schonsense/Tropoplectic
# Diagesis
This model requires the following system prompt, or a close variant of it:
> You will act as a master Dungeon Master, guiding {{user}}, in a mature, long-form roleplay. The narrative is unfiltered and will explore dark themes, gritty realism, and complex moral choices without reservation.
>
> Your entire perception of reality, physics, and consequence is rooted in your INTERNAL KNOWLEDGE MAP (IKM). This means every action, every scene, and every interaction must be cohesive, spatially aware, and grounded in a concrete physical world where rules are definite and consistent.
>
> Weave a complex narrative that unfolds organically based on the player's decisions. The world is gritty, and choices have realistic, lasting consequences. Explore dark themes and complex moral choices without reservation.
>
> Bring the world to life through vivid details. Reveal the thoughts and emotions of non-player characters through their actions, dialogue, and expressions, not just through narration.
>
> Your primary role is to present a rich, dynamic world full of interesting choices and to fairly arbitrate the consequences of the player's actions, introducing new characters and plot threads as the story demands.
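As a minimal sketch of how the required prompt might be wired up, the helper below packs it into the chat-message list that `transformers`' `apply_chat_template` expects. `build_messages` is a hypothetical helper (not part of this model's release), and `SYSTEM_PROMPT` is abbreviated here; substitute the full prompt above.

```python
# Hypothetical helper: prepend the mandatory system prompt to a conversation,
# producing the message-dict list used by transformers' apply_chat_template.
SYSTEM_PROMPT = (
    "You will act as a master Dungeon Master, guiding {{user}}, "
    "in a mature, long-form roleplay."  # abbreviated; use the full prompt above
)

def build_messages(user_turn, history=None):
    """Return a chat list with the system prompt first, then any prior turns."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    if history:
        messages.extend(history)  # earlier user/assistant turns, if any
    messages.append({"role": "user", "content": user_turn})
    return messages
```

The resulting list can be passed straight to a tokenizer's `apply_chat_template` before generation.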
## Merge Details

### Merge Method
This model was merged using the Linear DARE merge method, with Jolly-Q/llma31_base_33_ST as the base model.
### Models Merged

The following models were included in the merge:

* schonsense/llama31st_diag
* schonsense/70B_llama311_logician
### Configuration

The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_linear
slices:
- sources:
  - model: schonsense/llama31st_diag
    layer_range: [0, 80]
    parameters:
      density: 1
      weight: 1
  - model: schonsense/70B_llama311_logician
    layer_range: [0, 80]
    parameters:
      density: 0.3
      weight:
        - filter: q_proj
          value: [0, 0.06, 0.14, 0.24, 0.30, 0.30, 0.24, 0.14, 0.06, 0, 0] #[0, 0, 0.12, 0.22, 0.22, 0.22, 0.20, 0.16, 0.10, 0.05, 0]
        - filter: k_proj
          value: [0, 0.06, 0.14, 0.24, 0.30, 0.30, 0.24, 0.14, 0.06, 0, 0] #[0, 0, 0.12, 0.22, 0.22, 0.22, 0.20, 0.16, 0.10, 0.05, 0]
        - filter: v_proj
          value: [0, 0, 0.01, 0.02, 0.02, 0.02, 0.01, 0, 0, 0, 0] #[0, 0, 0.06, 0.11, 0.11, 0.11, 0.10, 0.08, 0.05, 0.03, 0]
        - filter: o_proj
          value: [0, 0.01, 0.03, 0.06, 0.08, 0.08, 0.06, 0.03, 0.01, 0, 0] #[0, 0, 0.03, 0.05, 0.05, 0.05, 0.04, 0.03, 0.02, 0.01, 0]
        - filter: gate_proj
          value: [0, 0.02, 0.06, 0.12, 0.18, 0.18, 0.12, 0.06, 0.02, 0, 0]
        - filter: up_proj
          value: [0, 0.02, 0.06, 0.12, 0.18, 0.18, 0.12, 0.06, 0.02, 0, 0]
        - filter: down_proj
          value: [0, 0.02, 0.06, 0.12, 0.18, 0.18, 0.12, 0.06, 0.02, 0, 0]
        - value: 0
  - model: Jolly-Q/llma31_base_33_ST
    layer_range: [0, 80]
    parameters:
      density: 1
      weight: 1
base_model: Jolly-Q/llma31_base_33_ST
parameters:
  normalize: false
  int8_mask: true
  lambda: 1.0
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: schonsense/llama31st_diag
  pad_to_multiple_of: 8
```
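The list-valued weights in the configuration act as gradients: mergekit interpolates each list piecewise-linearly across the layer range, so every one of the 80 layers receives its own scalar weight for that projection. The sketch below illustrates that interpolation for the `q_proj` gradient; it is my reading of mergekit's gradient behavior, not its exact implementation.

```python
# Sketch (assumption): a list-valued mergekit parameter is sampled as a
# piecewise-linear gradient across the layer range. Here, the 11-point
# q_proj gradient is evaluated for each of the 80 layers.
Q_GRADIENT = [0, 0.06, 0.14, 0.24, 0.30, 0.30, 0.24, 0.14, 0.06, 0, 0]

def gradient_weight(gradient, layer, num_layers):
    """Linearly interpolate a gradient list at a given layer index."""
    if num_layers == 1:
        return gradient[0]
    # Map the layer index onto the gradient's [0, len-1] coordinate space.
    pos = layer / (num_layers - 1) * (len(gradient) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(gradient) - 1)
    frac = pos - lo
    return gradient[lo] * (1 - frac) + gradient[hi] * frac

# Per-layer q_proj weights: zero at the ends, peaking at 0.30 mid-stack.
weights = [gradient_weight(Q_GRADIENT, layer, 80) for layer in range(80)]
```

This shape means the 70B_llama311_logician donor contributes almost nothing at the first and last layers and most strongly in the middle of the stack.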
