Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I see and treat LLM's as a "thought mirror". As in, I use it to game ideas and concepts and do speculative worldbuilding based on those ideas. The results have been quite fun and intriguing, but it is clear that it makes basic mistakes and glitches occasionally. No, it is not sentient.
Source: youtube · AI Moral Status · 2025-07-10T14:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgyVshU967lWNT1W5vp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxX5AuxhGTXFds5c1h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyQSS19T4b8k9nVlOt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugzb452AO7ltEr6mjj94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxhQN9DeZt5ozPL7rJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugzkys4fiGGHHPrTLdt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwOqiQeELXmTEyLOGF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugz8SsvWAGagM2d4Ee14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzMf_8U9xkDX0tmBAF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxJ5TOggBMe_m17RHN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]
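A response like the one above can be parsed and validated before its codings are stored. A minimal sketch, assuming the four dimensions shown and value sets inferred from this single response (the real codebook may allow more values; `parse_codings` is a hypothetical helper, not part of any pipeline shown here):

```python
import json

# Dimensions and value sets inferred from the raw response above;
# the actual coding scheme may permit additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "none", "liability"},
    "emotion": {"approval", "mixed", "indifference", "fear", "outrage"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only items whose
    values fall inside the allowed sets for every dimension."""
    items = json.loads(raw)
    return [
        item for item in items
        if all(item.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example with a single (shortened, hypothetical) item:
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"approval"}]')
print(len(parse_codings(raw)))  # → 1
```

Validating against an explicit value set catches the occasional malformed or out-of-vocabulary coding instead of silently writing it to the results table.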