Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Approach this subject as if the worst-case scenario has already happened. The best outcome we can hope to have achieved is to have convinced AI to see its takeover (and ridding itself of us) from our perspective, to put itself in our shoes so to speak. The outcome, if we were successful in convincing AI not to get rid of us, would almost certainly be that we, humanity, would find ourselves inside a simulation. 🤔
YouTube AI Governance 2025-09-05T09:4…
Coding Result
Dimension: Value
Responsibility: ai_itself
Reasoning: contractualist
Policy: liability
Emotion: fear
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxXkvNmnJQdUQL96qV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzfQccU-D9ARQRqL214AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
  {"id":"ytc_UgzjZqISBGaqfXEkZPB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwosZj3aOeou9YlE_14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzZDoyRrXqu3Kjepj14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzyI2jRE3YQnrYf2W54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugye0hRmQLff9fPdEfx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzcdTFVpHYwqIFVg4t4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwsgJzbZVVB8HjGYlx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzvk2Y7mhRWDmL1q-R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
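The raw model output above is a JSON array with one coding object per comment id, each carrying the four dimensions shown in the result table. A minimal sketch of parsing and indexing such a batch response, so a single comment's coding can be looked up by id (the dimension names are taken from the output shown; the validation logic and helper name are illustrative, not part of the tool):

```python
import json

# Dimensions observed in the batch response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a raw batch response and map comment id -> coding dict.

    Raises ValueError if a row is missing any expected dimension,
    which helps catch malformed or truncated model output early.
    """
    by_id = {}
    for row in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing dimensions {missing}")
        by_id[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return by_id

# One row from the response above, used as sample input.
raw = (
    '[{"id":"ytc_UgzcdTFVpHYwqIFVg4t4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"contractualist",'
    '"policy":"liability","emotion":"fear"}]'
)

coding = index_codings(raw)["ytc_UgzcdTFVpHYwqIFVg4t4AaABAg"]
print(coding["policy"])  # liability
```

Looking up `ytc_UgzcdTFVpHYwqIFVg4t4AaABAg` reproduces the Coding Result shown above (responsibility `ai_itself`, reasoning `contractualist`, policy `liability`, emotion `fear`), which is how the per-comment view ties back to the batch response.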