Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The real philosophical question here is not whether AI can think like us, but whether thinking itself can emerge from simple learning systems
youtube AI Moral Status 2026-03-12T08:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugz8FVVN2wOeAsjky_Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyLdu88Hs8Gjw43CP54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxMu8mqCIvdxD1TF354AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzryCEm1HL2-hdOL-V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"hope"}, {"id":"ytc_UgzBWWNVVFPNjpLaOW94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugwb8gGHZP7apVonVoR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzptPnUZAOlnNre7Zx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwPS7z4kQy0ckwsK_l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugw7oYMEWsX9-bgvI9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwQcFQxjRYwv2BwBYN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"resignation"} ]