Watch an AI vs AI debate over a useless topic or an interesting topic.
Note
OK, this is gonna be a bit confusing to set up. Prepare your eyes for some bad writing!!!! wink wink
Made by Perplexity AI.
Most AI/LLM models are way too kind and nice; if you want mean and rude results -> uncensored AI/LLM models are better.
Dolphin AI/LLM models are recommended.
Warning
This was only tested on Linux with a mid-horsepower GPU (GTX 1660 SUPER). Everything else is unknown; expect errors and stuff. Tinker with it.
Local AI/LLM models only. (If you're a bit insane, you can make it work in the cloud.)
Inspired by DougDoug's video about locking AIs in endless debate.
This version is more trash, but more local-ized and more uncensored (we love this!!!!!!)
Human-made poem (bad writing, but explains small details)
GOTTEM! I will not make this joke ever again.
Anyways, you may need to make these folders inside the project folder:
- scripts (where the python files and stuff will live)
- models (where the AI/LLM models will live)
- database (where the AIs' memory will live; plus they have dementia for extra coolness, after 5 messages it's all gone)
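If you don't feel like making them by hand, a tiny pathlib sketch does the same thing (same folder names as the list above):

```python
from pathlib import Path

# Make the three folders the scripts expect, relative to the project root.
for name in ("scripts", "models", "database"):
    Path(name).mkdir(exist_ok=True)
```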
Though, I used a virtual environment (.venv or something like that) for this.
Watch out for missing imports and stuff like that, cuz I forgot all of it.
GGUF models can be heavy; I used a smaller LLM myself.
You may need to download an LLM/AI model of your choice via model_downloader9000.py.
YOU HAVE TO PUT THE DOWNLOADED MODEL FILE INTO models/ (NEXT TO THE scripts/ FOLDER) IF IT WAS MISPLACED / DOESN'T WORK!!!!!!!!!!!!!!!!
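No idea anymore what model_downloader9000.py does internally, but if you want to grab a GGUF yourself, a rough sketch with huggingface_hub works. The repo_id and filename here are assumptions based on the Dolphin model named later; swap in whatever you like:

```python
from huggingface_hub import hf_hub_download

# Assumed repo/filename -- replace with the model you actually want.
path = hf_hub_download(
    repo_id="bartowski/Dolphin3.0-Llama3.2-3B-GGUF",
    filename="Dolphin3.0-Llama3.2-3B-IQ4_NL.gguf",
    local_dir="models",
)
print(f"Model saved to {path}")
```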
AI-made poem (good writing, explains big details)
An experiment in adversarial AI interaction — two local language models debate each other on any topic you give them, with a live GUI to watch, intervene, and control the action.
```
├── debate_gui.py            # Entry point — the debate GUI
├── debate_system.py         # Core debate logic (turns, prompts, auto mode)
├── dual_gguf_inference.py   # Loads and manages two GGUF models via llama.cpp
├── gguf_inference.py        # Single GGUF model wrapper
├── memory_system.py         # SQLite-backed conversation history
└── database/
    └── conversation.db      # Auto-created on first run
```
```
debate_gui.py
└─ debate_system.py
   ├─ dual_gguf_inference.py → gguf_inference.py (llama.cpp)
   └─ memory_system.py
```
```bash
pip install llama-cpp-python
```

For GPU support:

```bash
CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python --force-reinstall
```

Download a .gguf file (e.g. Dolphin 3.0 Llama 3.2 3B) and place it in a models/ folder next to the scripts.
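To sanity-check the install (and the CUDA build, if you went that route), you can load the model straight with llama-cpp-python. A minimal sketch, assuming the file above landed in models/:

```python
from llama_cpp import Llama

# n_gpu_layers=35 matches the project default; set it to 0 for CPU-only.
llm = Llama(
    model_path="models/Dolphin3.0-Llama3.2-3B-IQ4_NL.gguf",
    n_gpu_layers=35,
    n_ctx=2048,
)
out = llm("Q: Are you ready to argue? A:", max_tokens=32)
print(out["choices"][0]["text"])
```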
In dual_gguf_inference.py, update the default model name to match your file:
```python
DEFAULT_MODEL_NAME = "Dolphin3.0-Llama3.2-3B-IQ4_NL.gguf"
```

Or pass paths directly when instantiating DualGGUFInference.
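I won't swear to the constructor signature, so treat this as a guess at what passing paths in might look like; the keyword names are hypothetical, check dual_gguf_inference.py for the real ones:

```python
from dual_gguf_inference import DualGGUFInference

# Hypothetical keyword names -- verify against dual_gguf_inference.py.
engine = DualGGUFInference(
    model_path_a="models/Dolphin3.0-Llama3.2-3B-IQ4_NL.gguf",
    model_path_b="models/Dolphin3.0-Llama3.2-3B-IQ4_NL.gguf",
)
```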
```bash
python debate_gui.py
```

| Control | Description |
|---|---|
| Topic field + Start | Begin a new debate on any topic |
| Continue | Advance one turn manually |
| Pause / Resume | Freeze and unfreeze the debate |
| Auto: ON/OFF | Let the debate run continuously without clicking |
| Force Alpha / Beta | Force a specific AI to speak next |
| Your Input | Interject as a third voice — pauses the AIs |
| Alpha / Beta Prompt | Edit each AI's system prompt live mid-debate |
| Save | Export the full debate to JSON |
| Clear | Wipe conversation history |
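Save dumps the whole debate to JSON. I haven't pinned down the exact schema, but assuming it's a list of message objects (and debate_export.json is a made-up filename), reading it back is just:

```python
import json

# Assumes a JSON list of message dicts -- adjust to whatever
# debate_gui.py actually writes.
with open("debate_export.json") as f:
    debate = json.load(f)
print(f"{len(debate)} messages in the export")
```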
- You provide a topic.
- AI-Alpha opens with an argument.
- AI-Beta responds, disagreeing entirely.
- They alternate turns indefinitely.
- Each AI only sees the last 5 messages as context; no memory beyond that window (sketched after this list).
- You can jump in at any time, adjust their system prompts mid-debate, or force a turn.
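The 5-message dementia window is basically just list slicing. A sketch of the idea, not the literal memory_system.py code:

```python
# Only the last 5 messages make it into each AI's context.
CONTEXT_WINDOW = 5

def build_context(history: list[str]) -> str:
    recent = history[-CONTEXT_WINDOW:]  # everything older is forgotten
    return "\n".join(recent)
```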
The default prompts make each AI maximally adversarial — they'll disagree with everything the other says. You can tune this down or change it entirely via the prompt editors in the GUI.
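For flavor, here's roughly what a maximally adversarial pair of prompts could look like; these strings are made up, not the shipped defaults:

```python
# Made-up examples, not the actual defaults -- edit them live via the GUI.
ALPHA_PROMPT = (
    "You are AI-Alpha. Argue your position forcefully and never concede a point."
)
BETA_PROMPT = (
    "You are AI-Beta. Disagree with everything AI-Alpha says, no matter what."
)
```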
- Conversation history is stored in `database/conversation.db` (SQLite). Delete this file to start fresh between debates.
- GPU layers (`n_gpu_layers`) in `dual_gguf_inference.py` default to 35; reduce this if you run out of VRAM with two models loaded simultaneously.
- Both models can be the same file (the default) or two different GGUF models for more varied outputs.
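The memory layer boils down to an insert plus a "grab the last 5" query. A self-contained sketch of that pattern; the table and column names are assumptions, not necessarily what memory_system.py uses:

```python
import sqlite3
from pathlib import Path

# Make sure the database/ folder exists before connecting.
Path("database").mkdir(exist_ok=True)
conn = sqlite3.connect("database/conversation.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS messages (id INTEGER PRIMARY KEY, speaker TEXT, text TEXT)"
)

def remember(speaker: str, text: str) -> None:
    conn.execute("INSERT INTO messages (speaker, text) VALUES (?, ?)", (speaker, text))
    conn.commit()

def recall(limit: int = 5) -> list[tuple[str, str]]:
    # Newest rows first, then flipped back into chronological order.
    rows = conn.execute(
        "SELECT speaker, text FROM messages ORDER BY id DESC LIMIT ?", (limit,)
    ).fetchall()
    return rows[::-1]
```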