Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- I don't really like ai "art" bc its just image made by robots, i love the feelin… (ytc_UgxTDgzrd…)
- Do you guys think debating AI can be useful? I research the topic beforehand. On… (ytc_UgzrDpYL1…)
- It's software engineering, man, and an AI engineer has to get experience and a master's… (ytr_UgyY_ctT6…)
- LOLLLL "none of this is true," bullshit buddy. Not all cultures are equal and if… (ytc_UgzAe_mT0…)
- AI cannot be fully understood or controlled by humans. AI teachers would eventua… (ytc_UgxthOCCn…)
- I Hope watchdogs designed this AI, they tend to leave the explicit details in 😅… (ytc_UgwSVgbmQ…)
- Everyone is talking about polar bears right now because AI uses a ton of water, … (ytc_Ugz8Beci3…)
- Someone WILL use AI for the wrong reasons...well, people already have... it's sc… (ytc_UgyHze89M…)
Comment
Play with Ollama and oss-gpt:20b (works on any Windows PC with 32GB RAM, not VRAM) and you'll see that it simply *cannot* say: "I don't know"! Even with topics that it appears to have good data for, it will miss obvious things and/or invent "facts" to add to the narrative. Even on the strictest: "facts only" setting, if it simply has no idea about the topic, it will just invent complete BS, end of story. The biggest problem with current LLMs/AI is the assumption of accuracy of the reply *and* the implication of supreme confidence in replies. TBH, _no_ , this is invariably wrong, but the models are designed to generate *output*, regardless of whether or not it has any real, verifiable, confidence in the answer.
youtube · AI Moral Status · 2025-10-30T19:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugyrn1U0GWIUR3wKpN54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx9mKKvDWvpyFX0XHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzyzfSzKFM0QPVG2v94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxAVu36ayojNyba9PF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwd0SooxExdRlgxCrd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx-cojJ_S3c-LnZatB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxq0yS22fQI8RihVXt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6yz-9PRGRbExhxIl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgypwbOe83rLmRCVlC14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzDJ9nbM8mnU_mg1ih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
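The raw response above is a plain JSON array, one object per coded comment, with the four dimensions shown in the Coding Result table. A minimal sketch of parsing and validating such a response is below; the category vocabularies are inferred only from the values visible in this sample (the full coding scheme may allow others), and `parse_codings` is a hypothetical helper name, not part of the pipeline itself:

```python
import json

# Category values observed in the sampled output above; the actual coding
# scheme may include additional values not seen here (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"resignation", "fear", "indifference", "outrage",
                "mixed", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, checking each
    field against the observed vocabulary and failing loudly otherwise."""
    out = {}
    for row in json.loads(raw):
        coding = {field: row[field] for field in ALLOWED}
        for field, value in coding.items():
            if value not in ALLOWED[field]:
                raise ValueError(f"unexpected {field}={value!r} for {row['id']}")
        out[row["id"]] = coding
    return out

# Example with a made-up comment ID, mirroring the coded result above.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
codings = parse_codings(raw)
print(codings["ytc_example"]["emotion"])  # outrage
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents a new label instead of picking one from the codebook.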