Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its ID.

Random samples
- "AI’s energy needs, which could top a million gigawatt-hours annually by 2030. T… (ytc_UgzMmIg-R…)
- It‘s like that one Gumball episode where they ask bobert (a robot) to protect al… (ytc_Ugz3cRJW0…)
- Ai just wont ever work because its impossible to turn the complexity of the huma… (ytc_UgzpCiuFj…)
- Adapt or die. It’s been a true statement from the beginning of time. We will ada… (ytc_UgyBHv2A3…)
- Machine Learning models can only be as good as the datasets used to train them. … (ytc_UgxWOWzzi…)
- lol yeah good one,, AI is a tool for HUMANITY by humanity,,, it can never replac… (ytc_UgyU_vVPx…)
- Kids should not be turned into robots, privacy of each individual should also be… (ytc_Ugy2UFhdz…)
- There absolutely was a right answer to that question and he knows it. The Ai eit… (ytc_UgzKS6V8C…)
Comment

> Nope
> Always humans will be the first and most dangerous thing in world
> It will be humans only behind every plot if AI make any such distruction

youtube · AI Harm Incident · 2023-12-07T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxmCGi4yu67A87QefB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxaglONU2aqAgipA0t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyimcMk5ng5qOM-f9N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy3FF5LWU6nSpHIzQJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx-VS8iuWVgqXm6k_R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyPfOZ-RNm_FW21VPB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz0E-j8KHNa8W_P3Ox4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwU4QtTogbvgYKBOXl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxmxngxOHiUhoimMKN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgymWtFnTyT4yONRIM14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
```
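A raw response like the one above is a JSON array of per-comment codings, so lookup by comment ID reduces to parsing the array and indexing it by the `id` field. A minimal sketch of that lookup (the field names come from the JSON above; the variable and function names are hypothetical, and only two rows are reproduced for brevity):

```python
import json

# Raw model output: a JSON array of per-comment codings,
# abbreviated from the "Raw LLM Response" block above.
raw = """
[
 {"id": "ytc_UgxmCGi4yu67A87QefB4AaABAg", "responsibility": "distributed",
  "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
 {"id": "ytc_UgyPfOZ-RNm_FW21VPB4AaABAg", "responsibility": "user",
  "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
"""

# Index each coding row by its comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment (KeyError if absent)."""
    return codings[comment_id]

print(lookup("ytc_UgyPfOZ-RNm_FW21VPB4AaABAg")["emotion"])  # fear
```

One caveat worth handling in practice: model output is not guaranteed to be valid JSON, so a real pipeline would wrap `json.loads` in a try/except and log unparseable responses rather than crash.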