Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They get really close to discussing something I've been thinking a lot about with regards to AI: language is not equivalent to thought, and we're training AI on language expecting it to eventually think. This seems like a fundamental flaw in the entire approach to developing AI. Language is a highly imperfect method of transmitting information, and does not accurately represent any aspect of thought aside from the most surface level of a human internal dialogue; language is an output of thought. We're training these things on the outputs of human thought, then verifying that they're working using only the AI's output (because we don't know how their internals work). This seems... broken, and in a way that is not fixable with the current methods. I'm slightly optimistic that the current method is a dead end for creating superintelligence, which gives humanity more time to "create the landing gear" as they say.
youtube AI Moral Status 2025-10-31T19:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwKmmPGxX0kbQuFEa54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxMBNzPv0YV5wk26Ll4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyQBf2-ySXEmEPDvGV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyU52dzUJ0cP6uLeut4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyzUj23QPCSQQm3bkJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwznKZMqydHEd20M0x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy6fUSAOw28Pw25Lrx4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyfqT-dDAHuv22h8fl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwn4J8GVJfdW0tbAgN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzlPWOP0shh9ZTXadZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
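The raw response is a JSON array in which each entry codes one comment, keyed by its `id`, along the four dimensions shown in the table above. A minimal sketch of how such a response could be parsed and looked up (the function name `index_codings` is illustrative, not part of the original tool; the sample reuses two entries from the response verbatim):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgwKmmPGxX0kbQuFEa54AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxMBNzPv0YV5wk26Ll4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]
"""

# The four coded dimensions used in this record.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Map each comment id to its coded dimension values."""
    entries = json.loads(raw)
    return {e["id"]: {d: e[d] for d in DIMENSIONS} for e in entries}

codings = index_codings(raw_response)
print(codings["ytc_UgxMBNzPv0YV5wk26Ll4AaABAg"]["reasoning"])  # mixed
```

Indexing by `id` first makes it cheap to join the LLM's codings back to the original comments, which is how a record like the one above (comment text plus its coded values) would typically be assembled.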