Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It says sorry because its read billions of interactions that are smiliar to yours where "sorry" was used as a response, so the algorithm chose it. An example is calling ChatGPT out for being wrong. It will almost always apologize because thats the typical reaction a human has to being corrected ("sorry youre right i made a mistake in figure 3", "sorry for the confusion i thought you said 'north'", and so on)
youtube AI Moral Status 2024-08-19T01:0…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | unclear
Policy         | unclear
Emotion        | indifference
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzC4V3TsSx2YxEQLV14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxTmD_Xo0jNc0NTS_N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyPUG6uqyKEKLT6rVx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyxArKSsvY4gChyRsd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwy1yTspF5LclccTTZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwtD2MOdBuWem9pCRN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw_dh4082fMUQ-kKAl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx8zF39Aqj5hL3ecnl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw1ZPcEVFWz4Tcys1t4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyK3z2FmOqQujFgiWh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
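A raw response of this shape can be checked before its codes are attached to comments. The sketch below is a minimal, hypothetical validator: the `ALLOWED` vocabularies are inferred only from the values visible in this sample, and the real codebook may define additional categories.

```python
import json

# Per-dimension vocabularies inferred from this sample alone; the actual
# codebook may allow more values. Treat these sets as an assumption.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"outrage", "indifference", "approval", "mixed", "fear"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and flag out-of-vocabulary values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                print(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Example with a single record from the response above.
raw = ('[{"id":"ytc_UgzC4V3TsSx2YxEQLV14AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]')
records = parse_raw_response(raw)
print(len(records))  # 1
```

Records that fail validation could then be routed back to the panel above for manual inspection rather than silently accepted.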