Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs don’t “want” things. They don’t “sense” things. They don’t have senses or desires. They predict a story narrative of “what would an AI do in this situation?”
youtube AI Moral Status 2026-03-06T04:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz45GmtJraMCDn99zp4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgyUzatLOGs-0yXEgKp4AaABAg", "responsibility": "user",        "reasoning": "mixed",            "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugzm-Bh7Gg-DBoYDb5R4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgyHSFeurGO7T8gs8Ih4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_Ugwa6hYJSYLY6t1pqv14AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugy8jloSet-C5fJuS7p4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate",      "emotion": "mixed"},
  {"id": "ytc_UgwsTdZ_CMi-hiuRS4V4AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgwrEiNbiAbiouo5vbV4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgyhfzxftGq8whifH094AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_UgyZfcfdHG4PlBruFKh4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "ban",           "emotion": "outrage"}
]
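The raw response above is a JSON array of per-comment codes, so matching a comment to its coding is a dictionary lookup keyed on `id`. A minimal sketch (the variable `raw_response` and the lookup itself are illustrative, not part of the tool; the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response above, abbreviated here to two entries):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of codes.
raw_response = '''[
  {"id": "ytc_UgwrEiNbiAbiouo5vbV4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyZfcfdHG4PlBruFKh4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]'''

# Index the coded rows by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Inspect the exact coding the model produced for one comment.
code = codes["ytc_UgwrEiNbiAbiouo5vbV4AaABAg"]
print(code["reasoning"], code["emotion"])  # deontological indifference
```

This is the lookup that connects the Coding Result shown above (reasoning: deontological, emotion: indifference) back to its entry in the raw array.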