Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is "thinking"? Agents are looking for "a reason to survive" when prompted? It "hides" its own power? Huge props and kudos for the stellar first part with the very good explanations, but you cannot make those statements and expect us that work with the tech to take the rest of the talk seriously.
YouTube · AI Moral Status · 2026-03-02T00:0… · ♥ 2
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_Ugy_Zi6e446z8ZwDbMd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwymEOgeiqlXTiobhx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwlKKdM9__HyX5L9O54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugwgc6XchNCeUkOtR0R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzA_iH6Cc417sW133x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzscWfQR7ZfIPuD2zF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyrxo3Yl8kUbsYG4Bt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyR-ev6jgBcapI0sfZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwuSqk0bViyGQoH9j54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxaqvLZtyDjo5EmdRR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"})
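Note that the raw response above closes the array with a stray `)` instead of `]`, which makes it invalid JSON. A plausible reason every dimension was coded "unclear" is that the pipeline falls back to that value when strict parsing fails. The sketch below illustrates that assumed behavior; the function name, fallback logic, and the synthetic `ytc_example` id are all hypothetical, not taken from the tool's source.

```python
import json

# Assumed dimensions, matching the coding-result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_comment(raw: str, comment_id: str) -> dict:
    """Extract one comment's codes from a raw LLM response string.

    Hypothetical fallback: every dimension defaults to "unclear" when
    JSON parsing or the id lookup fails.
    """
    fallback = {dim: "unclear" for dim in DIMENSIONS}
    try:
        records = json.loads(raw)  # strict JSON: a stray ')' raises JSONDecodeError
    except json.JSONDecodeError:
        return fallback
    for record in records:
        if record.get("id") == comment_id:
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    return fallback

# Synthetic example: a valid entry, but ')' where ']' should close the array.
malformed = ('[{"id": "ytc_example", "responsibility": "company", '
             '"reasoning": "virtue", "policy": "none", "emotion": "outrage"})')
print(code_comment(malformed, "ytc_example"))
# → {'responsibility': 'unclear', 'reasoning': 'unclear', 'policy': 'unclear', 'emotion': 'unclear'}
```

Replacing the final `)` with `]` makes the same string parse cleanly, at which point the per-comment codes come through instead of the fallback.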