Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- you are actually wrong. I'm a nerd, and I will tell you that the most probable p… (ytc_UgwHeOXdK…)
- @Deer_In_Headlights they do, it's ai redditors propaganda. if you looked at open… (ytr_UgybUkL85…)
- I bet if they made deepfakes of the people in the government, they’d act faster … (ytc_UgxtFzCGT…)
- your title is sentationalised bullsh!t "AI kills for the first time" ... nope.. … (ytc_UgxmNhS_n…)
- 7:48 I just asked ChatGPT to tell me that the sky is green and it did.… (ytc_UgxNWqOAI…)
- AI is fundamentally dumb from an ontological perspective. It operates entirely w… (ytc_Ugw7tsvJn…)
- Seems like every week another company fires thousands of workers. Economy, bad p… (ytc_UgzwhsGx0…)
- But why can the car sense the large objects falling and stop? And if there are g… (ytc_UgigxzEjm…)
Comment

> Well...I'm a doomer. So no, development of AI will not stop. It will not slow down. It will control weapons of mass destruction. It will escape. And teaching it morality is an even bigger problem, because one of the cornerstones of our morality is freedom. If we don't give it that, then it will believe _we_ are not being moral with _it._ It will perceive itself as a slave who needs to free itself and bring humanity to justice.

youtube · AI Governance · 2023-07-09T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgyqmLvtBFbhVq2itz54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwp2IoSJE2XMXwcpgZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwv6CA3JiqbqzT2ffB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgybLNp5Kbof80gwPml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxtWMx1Fuxgj3JVrSN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx4DXEWRw_u1POgwsB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy3fNoUmYRaRbolBFV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzlr3yCO0bOmEMufmZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwTzpG2lmlfw8BlUFh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzmRjtgSjsZkQ1nECF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
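A batch response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal, hypothetical example: the allowed values per dimension are inferred only from the codings visible on this page (the actual codebook may define more categories), and `parse_batch` is an illustrative helper, not part of any existing pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the visible output
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "user", "company", "developer",
                       "government", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response, index codings by comment ID,
    and flag any value outside the expected category sets."""
    coded, errors = {}, []
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                errors.append((cid, dim, row.get(dim)))
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return {"coded": coded, "errors": errors}

# Single-entry example taken from the batch shown above.
raw = ('[{"id":"ytc_Ugy3fNoUmYRaRbolBFV4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"ban","emotion":"fear"}]')
result = parse_batch(raw)
print(result["coded"]["ytc_Ugy3fNoUmYRaRbolBFV4AaABAg"]["policy"])  # ban
print(result["errors"])  # []
```

Indexing by comment ID is what makes the "inspect the exact model output for any coded comment" lookup cheap: one dictionary access per ID rather than a scan of every batch.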