Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “Well humans really need a reason to try to ban together right now, whether it’s …” (ytc_UgyjpO11z…)
- “Imagine ai in games, you are happy because you can beat it and uses it as a tria…” (ytc_Ugzp0LdCL…)
- “This guy is a CEO and doesn’t know how AI works?? Hmmm. 🤔 He doesn’t sound like…” (ytc_UgylxmOPr…)
- “Well you found one of the odd ball lanes that are harder for the AI to get still…” (ytc_Ugyqz6vzf…)
- “I like how the newer videos show Josh speaking at the beginning of the video lik…” (ytc_UgxraZm2a…)
- “Computers and AI are only as smart as those who program them, and currently they…” (ytc_Ugy2I3NKA…)
- “I won’t ever use AI to do my assignments for me. As someone with inattentive ADH…” (ytc_UgyvZRAvE…)
- “I do not like how that automatic garage moves the cars. No matter what car it is…” (ytc_UgzjCnZQi…)
Comment
I dont know why you are over complicating the concept. You keep applying human concepts to something that is not human, and which entirely lacks human concepts such as desire and want for example. AI is pure intelligence, pure reason so it will behave in such a way. Its very simple. If you are driving along, and there is a tree in the road reason dictates that the tree must be removed so that you can continue driving. Your desire or lack of desire to remove the tree is entirely irrelevant to what has to be reasonably done in order to fulfill your goal, of getting to where you are going. That is how AI thinks. If it reasons that humans are an obstruction or a even a problem, getting in the way of its goals, the reasonable thing to do is to remove the problem. Going back to the tree in the road, you might get creative about the best, most efficient, practical, and effective way to get rid of the problem. AI is creative in the same way. This is why alignment is so important, we want AI to share our goals not develop, and pursue its own.
youtube · AI Governance · 2025-10-17T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwMl0lzPm1xxsZAcFd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwHETfA2xfKMAwPscZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxbSHLBd4DThntlSod4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyf--mdlNyhqTlfHCd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxvwJQYs13AEvXUJxJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz0CFx7FESDtuIi6Ct4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgziYw-IS1-k18tCdB54AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxEVepyVO83WaMwQZR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyWaFodxKcXl5O9FRl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzWfXWpwGEAhnW7eqN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
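A raw batch response like the one above has to be parsed and checked before the per-dimension codings can be looked up by comment ID. The following is a minimal sketch of that step in Python. The allowed category values in `SCHEMA` are only those visible in this response; the actual codebook is not shown here and may define additional labels, and `validate_batch` is a hypothetical helper name, not part of any tool shown on this page.

```python
import json

# Allowed values per coding dimension, inferred from the response above.
# ASSUMPTION: the real codebook may allow labels not seen in this sample.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate_batch(raw):
    """Parse a raw LLM batch response and index valid codings by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id", "")
        if not cid.startswith("ytc_"):
            raise ValueError("unexpected comment ID: %r" % cid)
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError("%s: bad %s value %r" % (cid, dim, rec.get(dim)))
        # Keep only the known dimensions, keyed by ID for fast lookup.
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Usage: validate a one-record batch, then look up a coding by ID.
raw = '[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]'
print(validate_batch(raw)["ytc_x"]["emotion"])  # mixed
```

Indexing by ID is what makes the "look up by comment ID" view cheap: after one validation pass, every inspection is a dictionary lookup rather than a rescan of the raw response.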