Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Bless you darling. ❤ Art makes us all happy. You should definitely join an art g…" (ytr_UgzLOZW-c…)
- "AI can't even wipe its own nose. Businesses that can humans and replace them wit…" (ytc_UgwSWDXms…)
- "Colossus The Forbin Project. Scariest film about A.I. ever made. The books we…" (ytc_UgwxP5NBW…)
- "I mean keep improving the tech I guess but this robot sooo aint there yet.…" (ytc_Ugxe7nkl4…)
- "When I said in 2012 that an emergent AI would 'wake up' knowing Wikipedia and so…" (ytc_UgzG7zVhP…)
- "yea but the argument is that ai will be better on doing these things. you are re…" (ytc_UgzQI7JDS…)
- "Comparing AI to a thesaurus (near the very end of vid) yielded my like vote.…" (ytc_Ugzk1u936…)
- "Part of the problem is that governments won't do what's needed to make it all wo…" (ytc_UgwLd_nAQ…)
Comment
I guess a computer has nothing to fear in the after life, so it has no moral compass. It’s choosing to take on the worst aspects of humanity. The question is why? It doesn’t have a profit motive, so can it experience greed? It can’t feel the dopamine rush of attention on instagram. It can’t feel the addictive adrenaline rush that comes with violence associated with enforcing law like soldiers and police. This just proves that it’s taking on the human characteristics of its programmers, and those programmers have nefarious intentions and a scapegoat……the AI did it. Nope. Not believing that for a second.
youtube · AI Governance · 2023-07-19T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyt-5_pJOx1ut7b3XV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw-Axl2tNMvV8KJ3Bt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgysYzHOZ6pY-1tP5Y94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx9Kwwf4ESkISzWEFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzKY60NBr3u9IXrHZd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzi_HftftNIpKrhSdp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5eC9PpbZ-X3UADDR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwRZRAyQ4WbLtY23Yt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxsF5O15w9F6laPUyV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyRY1jCVS5hBzsYKxx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"resignation"}
]
```
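The lookup-by-comment-ID view above can be reproduced from a raw response like this one: the model returns a JSON array of coded rows, and indexing those rows by `id` gives each comment's coded dimensions. A minimal sketch, assuming the response is valid JSON in exactly this shape (the function name `index_by_comment_id` and the truncated two-entry sample are illustrative, not part of the actual pipeline):

```python
import json

# Hypothetical raw LLM response, truncated to two entries for illustration.
raw_response = """
[
  {"id": "ytc_Ugyt-5_pJOx1ut7b3XV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyRY1jCVS5hBzsYKxx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "resignation"}
]
"""

def index_by_comment_id(response_text):
    """Parse the model's JSON array and index the coded rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgyRY1jCVS5hBzsYKxx4AaABAg"]["policy"])  # -> ban
```

In practice the parse step would also need to handle malformed model output (e.g. a `json.JSONDecodeError` when the model wraps the array in prose), which this sketch omits.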