Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI won't take control unless it's been trained to take control. It's just an immense matrix of numbers and a tool, not a new dimension of intelligence.
AI can't intuit anything, it just gives you the highest probable answer with a touch of randomness thrown in.
And here's the cold hard truth, from John Carmack himself (less than a month ago), who is actively working on AGI. AI cannot even remember how to play Atari 2600 game A after it begins training on Atari game B. Persistence is very, very difficult for AI at the moment.
youtube
AI Responsibility
2025-08-08T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx-jFrY7RHiqO2EjS14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQLtcNP1DUHIYFONt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgylH4-iMPkxqFDD5tJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzuTuZITPK0QXUYiYJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwWlsb42xksjONlnzh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxwSV_ILn6FbELG3_t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx3t4TgBu38svrIUq94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxxXoIvLYFV2eoansR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyTmuaHFqUoo0T9oXJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzKcQTHOQ-N0BuUWVN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
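The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response can be parsed and a comment looked up by its ID — the `raw_response` literal is a shortened copy of the array above, and the variable names are illustrative, not the inspector's actual implementation:

```python
import json

# Hypothetical raw LLM response: a JSON array of per-comment codes,
# trimmed to two records from the example above.
raw_response = """[
  {"id":"ytc_Ugx-jFrY7RHiqO2EjS14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQLtcNP1DUHIYFONt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]"""

# Index the records by comment ID so a single coded comment can be
# retrieved directly, mirroring the lookup-by-ID behaviour.
codes = {row["id"]: row for row in json.loads(raw_response)}

record = codes["ytc_Ugx-jFrY7RHiqO2EjS14AaABAg"]
print(record["responsibility"], record["emotion"])  # → developer indifference
```

Indexing by ID also makes it easy to check that every comment sent to the model came back coded: compare the set of submitted IDs against `codes.keys()`.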