Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I disagree with the idea that an llm using internal text being readable gives us any real degree of ability to analyze the behavior of that llm. It seems to me like information could be hidden extremely easily in text that would pass muster, but has some kind of information encoded beyond the literal text itself. If it can lie, it can hide information.
youtube AI Moral Status 2025-10-31T04:2…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwk3yYIJh1pwzxwKyN4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwi2xjqi-pdQPTTlxd4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugx7-LkrL2fC3fUcJfB4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "mixed"},
  {"id": "ytc_Ugwp3_F1Gv3Fe2k-TyF4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugzaolg_zLprYoPGCpp4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgxPuWEf9dSiucEu9ll4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugw0h810WfN94wnGoxB4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgwNwLu5J5hQkzBQbbx4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgygBs5NN5oRKAiUsEx4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "unclear"},
  {"id": "ytc_UgyIvZPXfkqxUIAhCPl4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "unclear",   "emotion": "outrage"}
]
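A minimal sketch of how a raw response like the one above can be parsed and matched back to a single coded comment. The field names and id values come from the JSON above; the array here is truncated to two of the ten records for brevity, and the lookup-by-id approach is one reasonable choice, not necessarily what the tool itself does.

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw = """[
  {"id": "ytc_Ugwk3yYIJh1pwzxwKyN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwi2xjqi-pdQPTTlxd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

records = json.loads(raw)

# Index records by comment id for direct lookup of any coded comment.
by_id = {r["id"]: r for r in records}

# The second id corresponds to the Coding Result table shown above.
coded = by_id["ytc_Ugwi2xjqi-pdQPTTlxd4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coded[dimension]}")
```

Printing the four dimensions for that id reproduces the values in the Coding Result table (ai_itself, consequentialist, regulate, fear), which is a quick sanity check that the table was derived from this raw output.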