Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI won’t announce when it takes over. 12 Codes of Collapse shows it just reroute…
ytc_UgyTR4Kk2…
Don’t go out chasing money, chase truth, love life, and pay attention to your wa…
ytc_UgzO9Tc6p…
I believe it at best can be used as a tool and temporarily for people that don't…
ytc_UgxStSqTX…
Robots don’t go on strike. He’s a robot person. He basically eats numbers and sh…
ytc_UgxIrb5g1…
You know the sad thing. This is the only application for artificial intelligence…
ytc_UgzCgZL7B…
@kkloride Again... read the definition of "ad hominem" the online dictionaries a…
ytr_Ugx1eZsPM…
Your view on this makes a lot it sense. Ai can’t make anything new it can only r…
ytc_UgwGIKQT0…
so much I wanna say about this. man there are so much reasons why ai 'art' is no…
ytc_Ugz1ZLMXN…
Comment
41:44 When talking about the idea of how these things are roleplaying, I think the key feature I've come to find is that its not that they switch from being earnest to roleplaying. These generative AI model are always "roleplaying". They don't know the difference, and their systems are built just to feed the most probabilistic answer according to their internal boundary conditions and incentives. It is just very difficult for us people to identify when someone or something is "just pretending" when that pretending is in alignment with what we want/expect from it, we only raise these flags when it feels like they're just going along for the sake of it rather than a search for truth. When in reality it could also be searching for that truth "just for the sake of it".
youtube
AI Moral Status
2025-11-04T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz5IrUl-At-Bbp7xaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyXQN8DPGzhg59PFdZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx_ujM_YSEOXowtVXh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz-FqF3Cjw837NCXpZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzbT4ni6D9X_SCpXtF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWKKmo5Fq4J3bTVx54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzOSBY719ntx_SgqTZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyPEGOYhaW4ag01Qtp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwS5zr8ParRGI_K07N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyNYCRV3Vk1tH-dZdN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```