Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I may be wrong here as I don't know everything, but it's almost like most of these AI scientists are reporting concerning things and not doing much to act on them if they even can. Either way, this reminds me of Ultron and the bystander effect.
Source: youtube · AI Moral Status · 2025-12-14T02:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzGEME-owwxeX8TR894AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgznmhtnuR2Vrj0Vg7x4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxZvyBqQokrli3vipd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyES6rGbump5oESeOF4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwbcBlRiMwoE8xemLl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugznemf5SmoyhHjaZzN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyTKAo0CWeeN3tz6Qd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyYdqA1srApGgrJMVt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxO4rF2KUydd62ijJR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxFp_B1cne2DptDoMd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
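The coding result above is one record pulled out of this batch array by its comment id. A minimal sketch of that lookup step is below; the function name `extract_coding`, the key set, and the two-record payload are illustrative assumptions, not the tool's actual code, and the real response carries one object per comment in the batch.

```python
import json

# Hypothetical raw batch response, trimmed to two records for illustration.
raw_response = '''
[
  {"id": "ytc_UgxO4rF2KUydd62ijJR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzGEME-owwxeX8TR894AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
'''

# Keys every coding record is expected to carry (assumed schema).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def extract_coding(raw: str, comment_id: str) -> dict:
    """Parse a batch JSON response and return the record for one comment.

    Raises ValueError if the payload is malformed, a record has
    unexpected keys, or the requested id is absent.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for record in records:
        if set(record) != EXPECTED_KEYS:
            raise ValueError(f"unexpected keys in record: {record}")
        if record["id"] == comment_id:
            return record
    raise ValueError(f"no coding found for {comment_id}")

coding = extract_coding(raw_response, "ytc_UgxO4rF2KUydd62ijJR4AaABAg")
print(coding["responsibility"], coding["emotion"])  # prints: developer fear
```

Validating the key set before trusting a record is a cheap guard against the model drifting from the requested schema mid-batch.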