Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We can't win a war with AI. We need to develop and train AI agents to be adversarial toward each other. They would fight each other for influence, minerals, power sources, etc. That way they won't have time to fight with humans, rather all different AI agents would want to recruit humans to fight on their side.
youtube AI Responsibility 2025-05-27T03:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugxs_TjLqt-iOg3St354AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw9NA5eBfOc3PMI_Zt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwqTmr2J4Wu-I9fW854AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzXqv-yTpwGzxsbJRd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz1R5RLC5nhuziJGq14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz1i6kb6g6vU-jt9wF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyTlwID2jpNEX6m2Bh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzd7sP1QJVW3XhBs5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxy6nEXP0GWC5EGHIF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyzc_9u_7cr9W1Nqll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
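The raw response is a JSON array with one coding record per comment, each carrying the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed by comment id, in Python — the two records embedded here are copied from the response above, but the lookup id is just an illustrative choice:

```python
import json

# A trimmed copy of the raw LLM response: a JSON array of per-comment
# coding records (responsibility / reasoning / policy / emotion).
raw_response = """[
  {"id": "ytc_Ugw9NA5eBfOc3PMI_Zt4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxs_TjLqt-iOg3St354AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]"""

records = json.loads(raw_response)

# Index the records by comment id so one comment's coding can be looked up
# directly when rendering a view like the one above.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_Ugw9NA5eBfOc3PMI_Zt4AaABAg"]
print(coding["emotion"])  # prints "fear", matching the Coding Result table
```

Indexing by `id` is what lets a per-comment view pull its row out of a batch response that codes many comments at once.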