Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "The fallacy is already at the beginning. There are too many jobless at the start…" (ytc_UgwsCZWKo…)
- "Some day AI will realize that we're truly the lesser being, lacking in intellige…" (ytc_Ugy6Q6wbw…)
- "AI already exceeded my expectations. Maybe one day it will be able to code on it…" (ytc_Ugwc6OJ0B…)
- "The ability to be wrong is not going to help these guys, so as probably been apo…" (rdc_n59hikj)
- "I knew they would fall, being so greedy for money without listening to our feedb…" (ytc_UgzsDjF1G…)
- "how to jam AI 🤣 anyways dont get spooked out gpt mostly plays along surely it d…" (ytc_Ugzmgnm6A…)
- "The second one resembles my dreams more. As a brain in a tank, I know I can't tr…" (ytc_Ugy2yF5mz…)
- "10:32 Or it is starting to condition us for asking it stuff before we do anythin…" (ytc_UgzHJqIXN…)
Comment
> an AI would be no different than a child, all you have to do is look out your window to figure out it won't end well. Take Religion as a prime example, the brainwashing of religion causes people to do some terrible things. Now if an arrogant person imprinted on an AI, the Ai would adopt an arrogant attitude because it doesn't know any better. With it being programmed with that kind of attitude it would be willing to do some hateful things on behalf of it's creators. The moment it realises that it is faster and smarter than it's creator. That arrogant attidue it was programmed with would turn on it's creators, hence why we have the film the Terminater. It only takes 1 rotten apple to spoil the bunch as they say. The risk in my eyes out waigh the gains.
youtube · AI Governance · 2025-11-05T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw-0P2rzjXN7-fwTTR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzVUxhYuHizOlMaYzN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz5fJhEH4MfDN20rgJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyQqQiVzkiPK4aHasV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwznIYDBCzbCOc1_gZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxup49Q-ELTs7jb9rl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxe64vRTmuzDb0m1wp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwHBtYCcnhi8YWSh594AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwkFG--bCXihHbiB594AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwRRW8sb4eswoDI4ql4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
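The Coding Result table above is one row of this batch: each JSON element maps a comment ID to the same four dimensions (responsibility, reasoning, policy, emotion) shown in the table. A minimal sketch of recovering one comment's coding from a raw response string — the field names come from the JSON above, but the helper function itself is hypothetical, not part of the tool:

```python
import json

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of codings)
    and return the coding dict for a single comment ID."""
    rows = json.loads(raw_response)
    by_id = {row["id"]: row for row in rows}
    return by_id[comment_id]

# Abbreviated example using the first element of the batch above.
raw = '''[
  {"id": "ytc_Ugw-0P2rzjXN7-fwTTR4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"}
]'''

row = coding_for(raw, "ytc_Ugw-0P2rzjXN7-fwTTR4AaABAg")
print(row["responsibility"], row["emotion"])  # developer fear
```

Indexing by `id` first (rather than scanning the list per lookup) keeps repeated inspections cheap when a batch codes many comments at once.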