Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
41:44 When talking about the idea of how these things are roleplaying, I think the key feature I've come to find is that its not that they switch from being earnest to roleplaying. These generative AI model are always "roleplaying". They don't know the difference, and their systems are built just to feed the most probabilistic answer according to their internal boundary conditions and incentives. It is just very difficult for us people to identify when someone or something is "just pretending" when that pretending is in alignment with what we want/expect from it, we only raise these flags when it feels like they're just going along for the sake of it rather than a search for truth. When in reality it could also be searching for that truth "just for the sake of it".
Source: youtube · AI Moral Status · 2025-11-04T21:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz5IrUl-At-Bbp7xaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyXQN8DPGzhg59PFdZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx_ujM_YSEOXowtVXh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz-FqF3Cjw837NCXpZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzbT4ni6D9X_SCpXtF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxWKKmo5Fq4J3bTVx54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzOSBY719ntx_SgqTZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyPEGOYhaW4ag01Qtp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwS5zr8ParRGI_K07N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyNYCRV3Vk1tH-dZdN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
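A raw response like the one above can be sanity-checked before it is stored as a coding result. The sketch below is a minimal validator assuming the label sets inferred from the values visible on this page; the real pipeline's codebook may define more labels, and the `validate_response` helper is hypothetical, not part of any existing tool.

```python
import json

# Allowed labels per dimension, inferred from the values shown above.
# Assumption: the actual codebook may be larger than this.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "industry_self", "none"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and verify every coded comment.

    Raises ValueError if the JSON is not a list, an id looks wrong,
    or any dimension value falls outside the allowed label set.
    """
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    for row in rows:
        if not str(row.get("id", "")).startswith("ytc_"):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row['id']}: {dim}={row.get(dim)!r} not in codebook"
                )
    return rows

raw = (
    '[{"id":"ytc_Ugz5IrUl-At-Bbp7xaB4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)
rows = validate_response(raw)
print(len(rows))  # 1
```

Validating before storage catches the common failure mode of LLM coders: a response that is syntactically valid JSON but drifts outside the codebook (e.g. inventing a new emotion label).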