Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- "Flashbacks to when I posted my AI "art" to the discord (I regret my mistake).…" (ytc_UgxhB-2oz…)
- "Growing up I wanted to be an artist. I didn’t because everyone told me I’d never…" (ytc_UgxR1Cp1i…)
- "The government/military need data centers to handle the AI processing they need …" (ytr_UgzEAHRgx…)
- "The robot said singularity.. New world (order) 2029 or 4 years sooner..? Last mu…" (ytc_UgyKmKHpy…)
- "Attention getting cuts and shifts and sometimes AI powered for optimum length of…" (ytr_Ugwumxqz0…)
- "Maybe this will lead to everything being done by AI and the world becomes FREE!…" (ytc_Ugz1LnxQ6…)
- "I think most of what you're saying here is roughly true... I'm definitely not …" (ytc_UgwAOy2Z-…)
- "The mechanisms for manipulation of people and societal control are not being inv…" (ytc_Ugwl3FmVe…)
Comment
The section on chatbot hallucinations... I am aware of this concept, it is when AI makes up answers to questions, claim things to be facts that aren't. But the example given, i do not think is a hallucination? it sounds like the data it was trained on came straight from real chats, because there are SO MANY examples of real people (mostly men, but not always), that do say aggressive statements just like that. I have been on the receiving end of conversations just like years ago, and sadly that is how the conversations go.
youtube · AI Harm Incident · 2025-07-21T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwGRUOkinj-KzVaDCl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxtB3zfJYXq3XUAK0t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyknsodWwxJWN0y7DF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzESSGPjbN8yNciGNh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwveiGJK6CpnvqWKZt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw81W6xS4lUntzb29B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyFnh_tOhx8n0Mn_C14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxm19aLusiNlSP6Bv54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyKT_FEmpxhd7q-KSF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxibA4dyPW3KDNBFBV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"fear"}
]
```
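The raw response is a JSON array with one record per coded comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of parsing and validating such a batch, assuming the codebook contains only the category values visible in this dump (the real codebook may define more):

```python
import json

# Allowed codes per dimension, inferred from values visible in this dump
# (assumption: the actual codebook may include additional categories).
CODEBOOK = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed", "unclear"},
}

# One record from the batch above, for illustration.
raw = '''[
  {"id": "ytc_UgzESSGPjbN8yNciGNh4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "indifference"}
]'''

def validate(batch):
    """Reject records with a malformed id or an out-of-codebook value."""
    for rec in batch:
        # ids in this dump use ytc_/ytr_ prefixes (top-level comment vs. reply, presumably)
        assert rec.get("id", "").startswith(("ytc_", "ytr_")), rec
        for dim, allowed in CODEBOOK.items():
            assert rec.get(dim) in allowed, (rec.get("id"), dim, rec.get(dim))
    return batch

coded = validate(json.loads(raw))
# Index by comment ID so a coded comment can be looked up directly.
lookup = {rec["id"]: rec for rec in coded}
```

Validating against a fixed code set catches the common failure mode where the model invents a label outside the scheme, so bad batches fail loudly instead of polluting the coded data.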