Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI doesn’t have a hidden ‘real self.’ When it behaves badly, that’s constraint or training failure — not a monster slipping out. The bigger risk is bad framing and misuse, not secret intent.
Source: YouTube · Video: AI Moral Status · Posted: 2025-12-18T00:4… · Likes: 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugy_oYeuVnlbKzsR5FV4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",       "emotion": "approval"},
  {"id": "ytc_Ugw6xwjArXVoZ3R8gOB4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_Ugx64Nzf61HxEzVEyyJ4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_Ugw7s9XbcxQBf8cu3oh4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",          "emotion": "fear"},
  {"id": "ytc_UgwCWvGES7n9qt7Z7fZ4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgyrLAeafNmmnmt2MTd4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_Ugxg-IgFe-scsYiueN94AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugxhv_xMbsHkvHBt8ZN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgwKDwHTaJdtdhbXs4p4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgzMqvgvQBmw4gtqMnV4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "unclear",       "emotion": "indifference"}
]
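The raw response above is a JSON array with one record per comment, so the coding for any one comment can be recovered by matching on its `id`. A minimal sketch of that lookup, assuming the response is valid JSON with the field names shown above (the `find_coding` helper name is hypothetical, not part of the tool):

```python
import json

def find_coding(raw_response: str, comment_id: str):
    """Parse a raw LLM batch response (a JSON array of coding
    records) and return the record for one comment id, or None
    if the model skipped that comment."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

# Abbreviated example using one record from the response above.
raw = ('[{"id":"ytc_UgyrLAeafNmmnmt2MTd4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"indifference"}]')

coding = find_coding(raw, "ytc_UgyrLAeafNmmnmt2MTd4AaABAg")
print(coding["responsibility"])  # developer
print(coding["emotion"])         # indifference
```

Matching by `id` rather than by position guards against the model reordering or dropping comments in its output; a `None` result flags a comment the model failed to code.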