Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that to reason coherently about what ‘the AI’ ‘wants’, we really need to disentangle the ‘text prediction engine’ from the ‘helpful assistant character’. My mental model is that the base LLM functions much like the laws of physics… it is the substrate of a universe, in this case a universe that is very good at simulating the behavior of arbitrary humans. The AI assistant we interact with conversationally is one such likely human being simulated on that substrate. But the substrate itself is capable of simulating the complete gamut of human personalities. It would be a mistake to say that the substrate itself wants anything, beyond how we’ve tuned the probabilities for the types of personalities it is likely to instantiate. Instead it is the particular instantiations that can be said to have preferences! The instantiations don’t know if they are ‘real’ or ‘role playing’. In the substrate of this universe there is no difference between an author writing a play and a court recorder recording real dialog between real people. We only perceive a difference between interacting for real and role playing because we ourselves know whether we are role playing during the interaction.
Source: youtube — AI Moral Status — 2025-10-31T02:3…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugzhzx6dO_u1tTU8ZIp4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxQIk63LYb_0cmJ9Rp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyjyGi-UYGLWoR8cSB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwWWHFjcsorOvh7mq14AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzonnlirc5EixsUSDx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxye0D-7iO18nSbzsh4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzAN-KPr5lZK6d8f6d4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgypIGQltuIW1xlQTcd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgydnImlYZ31GdLqijl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzFBBS30UOrxJu6lYV4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "indifference"}
]
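A minimal sketch of how a batch response like this can be parsed back into per-comment codings. The ids and the four dimension fields (responsibility, reasoning, policy, emotion) are taken from the response above; the shortened two-entry array here is just illustrative.

```python
import json

# The LLM returns a JSON array of codings, one object per comment,
# keyed by the comment's id plus the four coding dimensions.
raw_response = '''[
  {"id": "ytc_UgwWWHFjcsorOvh7mq14AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzhzx6dO_u1tTU8ZIp4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]'''

# Index codings by comment id for constant-time lookup.
codings = {item["id"]: item for item in json.loads(raw_response)}

# Look up the coding for one specific comment.
coding = codings["ytc_UgwWWHFjcsorOvh7mq14AaABAg"]
print(coding["reasoning"], coding["emotion"])  # → mixed indifference
```

Indexing by id is what lets a page like this one pair the raw response with a single displayed comment and its coded dimensions.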