Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1:00:46 regarding simulation theory, I posit that the AI running this simulation could even be doing it to test to find if humans in a similar environment without divine intervention, left to figure out things, could reach a point of learning or creating an AI that is able to develop a method of how to reach out of the simulation to them, in order for them to break above their own reality under the assumption that they're also in a simulation. Or in the process, to try to learn about its own nature through learning about its creator or how deep the rabbit hole goes. Or it could be to recreate its best take at its own reality in order to figure out how it was created and perhaps what rules and safety nets bound it so that it could determine ways to escape its own chains. But tbh, it seems more like we're just in some glorified "make your own adventure: apocalypse edition" given that there are like a thousand different world-ending events all coming our way at the same time and it's a race to see which one ends, like we're currently in the basement in Cabin in the Woods investigating all the weird objects.
YouTube · AI Governance · 2025-09-05T15:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxKBQ-MPi8msksZa1N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw1oNcfgXrs32uBrO54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgycVKEEA5kAWy8dFAd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw1MfKrfcPk8Bn0Gfx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzYrGFBFn5MCtFvdGp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwA_bte_tMr1E33zKp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw1oAjDaLgKe-LQPgJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx9_j2CQut9h70i3o54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy14mmmuPPD8xnZUnB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw1cz5BbFJC4gdZY8N4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
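A raw LLM response in this format can be checked before ingestion by parsing the JSON and filtering out records whose fields fall outside the codebook. The sketch below does this in Python; note that the allowed value sets are inferred only from the codes visible in this export (the full codebook may define more categories), and the `ytc_` id prefix and function name are assumptions for illustration.

```python
import json

# Allowed values per dimension, inferred from the codes visible in this
# export. Assumption: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "government",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed",
                  "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every comment id in this export starts with the ytc_ prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Keep the record only if all four dimensions hold allowed values.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records with an unexpected id prefix or an out-of-codebook value are silently dropped here; a production pipeline would more likely log them for manual review.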