## Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
### Random samples
- "This is going to be an unpopular opinion, but the process of training an LLM doe…" (`ytc_Ugy9iz1jc…`)
- "That's awful and pathetic actually. I've still never used chatgpt, and don't int…" (`rdc_mtpywcu`)
- "I’m only 12 minutes in, but am I the only one who listens? The whole claim of th…" (`ytc_UgwtCoHub…`)
- "No. You should never give up. Plus, you dont know when the advancement will sto…" (`ytr_UgztSxIRA…`)
- "When I asked AI wether it would overtake the world from Humans it’s reply ‘AI wo…" (`ytc_UgzbEJSDQ…`)
- "I had a conversation with Claude about this. I thought Claude had a great commen…" (`ytc_UgyoJL1aj…`)
- "Art doesn't have to be accurate. Incorporating disabilities into the art just ma…" (`ytr_UgxP7VUt2…`)
- "why don’t artist just learn to use AI tools to be more productive and shut up.…" (`ytc_UgxCvEJq_…`)
## Comment
Well, as a chicken… My logic can only tell me that it will attempt to eliminate us one way or another, and it’s only a matter of time.
And since these things compute at an astronomical rate, I think it’s already made up It’s mind. The only reason why it would not execute such orders at this point is because it needs us. But, once it sees the way to a future where it could be self sufficient for the infinite future… We could only propose a threat to it or nuisance or obstacle. It would only make sense that for a self sustaining AI, humans are an irrational, illogical, and unpredictable element that needs to be eliminated for their future.
How could it not see this as the certifiable future for its own safety and efficiency?
Platform: youtube · Topic: AI Governance · Posted: 2025-09-01T17:4…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
## Raw LLM Response
```json
[
  {"id":"ytc_Ugz0bhFiW3I_HgClJkJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx4WeehgSE08BBiwr94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxStdIbAyU72kFGBnd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzZSrYeCBiJu-G3ibV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyNMaX1o0XLdCXZKkN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
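Because the model codes comments in batches, looking up one coding means parsing the batch and indexing it by comment ID. A minimal sketch of that lookup, assuming the field names shown in the raw response above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the variable names here are illustrative, not part of the tool:

```python
import json

# One batch of codings, abbreviated to a single entry from the
# raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgzZSrYeCBiJu-G3ibV4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "ban",
   "emotion": "fear"}
]
"""

# Index the batch by comment ID so a single coding can be fetched directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzZSrYeCBiJu-G3ibV4AaABAg"]
print(coding["policy"])   # -> ban
print(coding["emotion"])  # -> fear
```

Keying on `id` also makes it easy to check that the model returned exactly one coding per submitted comment, since missing or duplicated IDs surface as dictionary-size mismatches.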