Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These ai’s aren’t trained to be honest, they are trained to give the user the answers that they like most. So if an ai says it is sorry, it says that because it was trained to do so.
YouTube · AI Moral Status · 2024-08-16T13:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugw9sEpPCgf2GV2TWr54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxtE4R-gAp9Zr1oTPN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "amusement"},
  {"id": "ytc_UgwxPsKuZ6B1fCDWwjt4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwmr8t2CMkgZ35GuU14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxITIL0GMVwnuKqeyV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwn-ld0aK9YH1pe8qd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugysh2RBszZHotmbxpp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxWk0_F6zOoRkwYJRZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzHDtt0OgiaMlyvDsZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwCUhniAXiUjpCX4el4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "amusement"}
]
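A raw response like the one above can be parsed and sanity-checked before the coded values are stored. The following is a minimal sketch; the allowed-value sets are assumptions inferred from the values visible in this batch, not a documented schema, and `parse_codings` is a hypothetical helper name.

```python
import json

# Assumed coding vocabulary, inferred from the values seen in this response.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "amusement", "outrage", "fear", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM JSON array and index codings by comment id,
    dropping any record with a missing or out-of-vocabulary value."""
    coded = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the response above, used as a smoke test.
raw = ('[{"id":"ytc_Ugw9sEpPCgf2GV2TWr54AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugw9sEpPCgf2GV2TWr54AaABAg"]["emotion"])  # indifference
```

Indexing by id makes it straightforward to join each coding back to its comment for inspection, as this page does.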