Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> Sorry for Mr Penrose, I do agree with him that AI is not conscious, but AI doesn't operate in absolutes. There is no "truth" for the AI. ONLY approximations. So it can and will absolutely make new stuff that humans can't. It doesn't understand truth. Or why. But it can absolutely measure stuff. For an AI a thing is true if it knows about it, and from what it knows it is more true than untrue.

Source: youtube · AI Moral Status · 2025-05-07T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
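The four coded dimensions above could be modeled as a small validated record type. A minimal sketch, assuming only the value vocabularies that actually appear in this view (the class name is hypothetical, and the real coding scheme may allow more values per dimension):

```python
from dataclasses import dataclass

# Vocabularies observed in this view; the full coding scheme may be larger.
RESPONSIBILITY = {"none", "developer", "ai_itself"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none"}
EMOTION = {"indifference", "mixed", "outrage", "fear"}

@dataclass
class CodedComment:
    """One coded comment: an ID plus one value per moral dimension."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self):
        # Reject values outside the observed vocabularies.
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unknown {name}: {value!r}")
```

Validating at construction time catches a model that drifts off the coding scheme (e.g. inventing a new emotion label) before the record reaches the table above.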
Raw LLM Response
```json
[
{"id":"ytc_Ugwy6K31YktADf2YCXx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwsIy_AG56tBSDOW94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxq-MHp4fDD_mmbDed4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzWK_0YCyhmJvivjK14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYjS4XgevAfs89pft4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzEa1LfMMLEN7x6dXV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyaTup6-JqZ8IgkESZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgweYzH9xHDWlQxIslx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxgxBZzSYA4HFz9MN54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxzFbwZRF-o2NC1chh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
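Because the model returns one JSON array per batch, looking up the coding for a single comment means parsing the array and keying records by ID. A minimal sketch of that lookup, using one record from the response above (the function name is hypothetical):

```python
import json

def index_by_comment_id(raw_response: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# One record copied from the raw response above.
raw = '''[
  {"id": "ytc_Ugwy6K31YktADf2YCXx4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]'''

coded = index_by_comment_id(raw)
print(coded["ytc_Ugwy6K31YktADf2YCXx4AaABAg"]["reasoning"])  # prints: consequentialist
```

This is the shape of lookup the "inspect by comment ID" view implies: the rendered table is just one record from such an index.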