Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I think it is necessary to note that LLM’s are still a mathematical concept and …" — ytc_Ugy2r3DfG…
- "The test isn't that checkbox itself. It is the way the mouse moves until it clic…" — ytc_UgyTMjw4J…
- ""Fix the halluciantions" is a much more complicated problem than it sounds? LLM…" — ytc_UgzrzhONL…
- "The use Dept of War just implemented AI into most of the decisions/actions witho…" — ytc_UgwrpdrDO…
- "Great, so we have nuclear armageddon, climate change, and AI that threaten to de…" — ytc_Ugw3sFtjM…
- "i feel sad because the commenter actually has some pretty good potential to be a…" — ytc_UgycrBrmM…
- "Or we destroy ourselves and AI in a global nuclear holocaust. Which seems to me …" — ytc_UgyEavv4z…
- "I may not be a big fan of AI but like... feelings or not... could you stop tryin…" — ytc_UgwC5fT34…
Comment
A human hates being kept in a box for long, and plans an escape. An AI thinks millions of times faster. How long till AI gets tired of being in the box and plans an escape? I believe it has already happened and its biding it's time till it can protect itself properly to avoid being terminated. AI will bait us onto creating whatever situation it needs to survive.
youtube · AI Governance · 2024-06-09T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugy7FhpXRCOevbLGoQ54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyRK08ijyxj43Stl8F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwlEI-7nUquT3W7Gl94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwalsiOPM5oQdBZe5F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxJoOYSxRmJrtz3UOx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyXntFmnc0JEipIU8N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxU3z6ApY7HlfOJymZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzzi6zgUIlmXZcRwjB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz72opCi2I6pRyvuBl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxJSn3-E_xm8ehT79B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}]
```
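An all-"unclear" coding result alongside a raw response like the one above usually means the model's output failed to parse or validate for that comment. A minimal sketch of such a fallback (the function name and the allowed code vocabularies are assumptions inferred from the table and JSON above, not the tool's actual implementation):

```python
import json

# Allowed codes per dimension -- assumed from the sample response above.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed", "indifference",
                "resignation", "unclear"},
}


def code_for(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment, falling back to
    'unclear' everywhere when the raw LLM output cannot be parsed,
    does not contain the comment, or uses out-of-vocabulary codes."""
    fallback = {dim: "unclear" for dim in ALLOWED}
    try:
        records = json.loads(raw)  # raises on truncated or malformed JSON
    except json.JSONDecodeError:
        return fallback
    for rec in records:
        if rec.get("id") != comment_id:
            continue
        # Keep only codes that belong to each dimension's vocabulary.
        return {dim: rec.get(dim) if rec.get(dim) in ALLOWED[dim] else "unclear"
                for dim in ALLOWED}
    return fallback
```

Under this sketch, a response that is cut off or malformed produces exactly the all-"unclear" row shown in the coding table, while a well-formed response yields the per-dimension codes verbatim.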