Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The government wont allow AI to take over.... Who's going to pay the taxes lol. …
ytc_Ugy84N6Lm…
It is high time to integrate something like "Artificial Wisdom" parallely for th…
ytc_UgyYDYns_…
I support AI, but I support ai disturbance even more to stop idiots from being w…
ytr_Ugw9ZJilQ…
6:29-7:25 "I have also heard people said, AI takes inspiration from references, …
ytc_Ugx2lVGbE…
My account's age and posts are irrelevant. Worked as a GIS Software Engineer par…
rdc_kyzk1oo
@andrewbrakey6214 you mean AI coded for you right? This is the problem with the…
ytr_UgwbIwKxU…
Read the report AI 2027. People comparing this to technology of the past just d…
ytc_UgxuZCMio…
well maybe artists actually put work in their art and ai steals it also stop bei…
ytr_Ugzib4KR_…
Comment
Another piece speaking as if LLMs are capable of thinking and therefore emerging malicious behaviour instead of simply outputing contextually fit text responses, something that is exactly—and strictly—what they have been trained to do.
This is exactly like thinking that toddlers have an accute understanding of causes and consequences, and then diagnosing one with suicidal tendencies for attempting to jump from a ledge.
youtube
AI Harm Incident
2025-08-23T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugw49cVqOfT6ajGEGqh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgytLuZ0fOEvJkmFosF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzsTRcrUjNp6M0a3Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwpnXodS0Gj6ItQ0Ct4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxodhSWm1oxA4AgeIh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx7kWm0aEWBCDK2vTJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw0rNz6NeN2fBmT20p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx0VesQ_qDJmEl-fm94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz-0cJqCxdUKNDa4I94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwlatGFFA252h7PGyd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
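The raw response above is a JSON array of per-comment codings, one object per comment ID with the four dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and looked up by comment ID — the function name `index_codings` is hypothetical, and the sample string is truncated to two records from the array above:

```python
import json

# Two records copied from the raw LLM response above (array truncated for brevity).
raw_response = '''[
 {"id":"ytc_Ugw49cVqOfT6ajGEGqh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgytLuZ0fOEvJkmFosF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        # Keep only the coding dimensions; treat a missing key as "unclear".
        indexed[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return indexed

codings = index_codings(raw_response)
print(codings["ytc_UgytLuZ0fOEvJkmFosF4AaABAg"]["reasoning"])  # deontological
```

This assumes the model returns well-formed JSON; in practice a production pipeline would also want to handle malformed output (e.g. catch `json.JSONDecodeError`) and validate each dimension against its allowed label set.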