Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These endless videos about AI safety, while sometimes interesting, are nothing but hubris. The idea that we can control an intelligence thousands of time greater than our own is silly. Is ASI dangerous? Of course it is. We could have assumed it would be dangerous before we created it and not done so. But to think we can keep it in a cage, or aligned to our desires, is just arrogance.
youtube · AI Governance · 2025-08-24T13:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxRMFRUam0CA2YoZ1p4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgxuEcHujOxS0fHOwcJ4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgzcI2zdfWJ4NpTHw7x4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugz5Lc6KDtmpA0Aijnd4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgwJoahSIZUrX6W7xhx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyA65uBpwx0OPX9NdN4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugy-K-ZN7YpJCwHbLTl4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_Ugxu6it-4sdAbovVIlh4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_UgwnBsozd3f7bk0wQ6J4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",      "emotion": "fear"},
  {"id": "ytc_UgyDhNYmMTibP_QSiP54AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
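A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal Python validator; the allowed value sets are inferred from this one batch only (the full codebook may define more categories), and the function name is illustrative, not part of any real pipeline.

```python
import json

# Allowed values per dimension, inferred from this sample batch only.
# ASSUMPTION: the real codebook may contain additional categories.
SCHEMA = {
    "responsibility": {"none", "distributed", "user", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "mixed", "indifference", "fear", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when its id looks like a YouTube-comment id (ytc_ prefix,
    as in the batch above) and every coded dimension uses an allowed value.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id", "").startswith("ytc_"):
            continue  # malformed or missing comment id
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Example: one record from the batch above passes the check.
raw = ('[{"id":"ytc_UgwnBsozd3f7bk0wQ6J4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"ban","emotion":"fear"}]')
print(validate_batch(raw))
```

Dropping bad records rather than raising lets one malformed code in a batch of ten pass through without losing the other nine; a stricter pipeline might instead log and re-prompt the model.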