Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you have a button that would stop AI would you push it? Not Yet! And I think it summarizes why we are here. Curiosity of human kind and its willingness to explore in the cost of his life/others is the driver even for those who know how fast things are going and how it can go wrong. Will we stop at some point? probably not, until a Hiroshima, Nagasaki happens. My personal feeling is that it is like atomic bomb, the nations around the world are racing to be the first AGI owner, and conquerors of the new world. If they believe that AGI is the new atomic bomb for next decades, why shouldn't they race to become the USA of the new world?
youtube AI Governance 2025-12-08T07:1…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugynh_fQr8Py9TTeH_Z4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgwDEe_FsbS5LYRus_94AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugx7Zl0wD-yUAgZ-kDN4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugxw9RkCBlAfPx-0DhV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyCO915wRaliQBdML54AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugz7owOCZUptE7OEUF14AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgyqvQY3kETR62x-QS94AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxCeN4oTysVN2eGBvd4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgzNt7KZBbun9NX_byF4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgzTbu1t6ZNyFih06MF4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"}
]
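The coded dimensions shown above for a comment are recovered from this raw response by matching on the comment's `id`. A minimal sketch of that lookup (the variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Trimmed here to the record that backs the Coding Result table above.
raw = '''[
  {"id": "ytc_Ugx7Zl0wD-yUAgZ-kDN4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "mixed"}
]'''

records = json.loads(raw)

# Index the records by comment id so any coded comment can be looked up
# directly instead of scanning the whole array.
by_id = {record["id"]: record for record in records}

codes = by_id["ytc_Ugx7Zl0wD-yUAgZ-kDN4AaABAg"]
print(codes["responsibility"], codes["emotion"])  # user mixed
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is the natural place to flag a comment as uncodeable rather than silently dropping it.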