Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Forgive me if someone has already suggested this, but isn't it entirely possible that if one AI model becomes extremely powerful, isn't it possible that another might attack it on behalf of itself or spring to our defence?
youtube AI Governance 2025-06-22T13:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugxbzdb_p8DJHhrHEzR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwgbDEuPiY_CRiwnEV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwncvvmXWG_9PaEbKB4AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyWkKJzJwZkk1A-NTJ4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwSn1ORBk4wGX9l-E94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxKSGFkx4xNyW3A3KZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzhorhOsxblVAwGGat4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy5cw7OQ_mPf4Woeih4AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgymSL-mp0rXxW9zlFh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzMbmdeOC8Jk3YmNYR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
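A minimal sketch of how a raw response like the one above could be parsed back into per-comment codings and validated. The allowed values per dimension are inferred only from the codings shown on this page; the actual codebook may contain additional categories, and the function name `parse_raw_response` is illustrative, not part of the tool.

```python
import json

# Allowed values per coding dimension, inferred from the codings shown above.
# Assumption: the real codebook may define more categories than appear here.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear", "none"},
    "policy": {"ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting unknown values."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example with one record from the raw response above:
raw = ('[{"id":"ytc_UgzMbmdeOC8Jk3YmNYR4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgzMbmdeOC8Jk3YmNYR4AaABAg"]["emotion"])  # fear
```

Validating against an explicit vocabulary catches the common failure mode where the model invents a label outside the codebook, rather than silently storing it.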