Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, if we humans are living in a simulation in which unchecked AI progress will inevitably create a Superintelligence that would likely wipe us out, then it's high time for the "God" running this simulation to intervene. Or is this simulation designed to test whether humans generally will opt to cooperate in order to save themselves? I lean toward believing that whatever happens, we've got it coming.
youtube AI Governance 2025-09-06T18:2…
Coding Result
Dimension: Value
Responsibility: distributed
Reasoning: contractualist
Policy: none
Emotion: fear
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxlmscQ58XZpJ3X3qN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxTzZzrlvspzdxncQZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzXQmq86ckpNbExv554AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgycraD6cejkzMLMCqR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx5DHngiRCRwRdMPGx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzQ82qHLz5HpXlA8nB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxnIUvsBHZP39QZhUJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyKmnwumadPd0dehW94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyyGegSnJVB8FtgEYB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxfP5k-S_ebMI9dqbV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
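A minimal sketch of how a per-comment coding result can be recovered from a raw batch response like the one above: parse the JSON array and index it by comment id. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown; the specific id used in the lookup is an assumption, chosen because its values match the Coding Result for this comment.

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codes.
# Field names come from the batch response above; only one row is
# reproduced here for brevity.
raw = """[
  {"id": "ytc_UgyyGegSnJVB8FtgEYB4AaABAg",
   "responsibility": "distributed",
   "reasoning": "contractualist",
   "policy": "none",
   "emotion": "fear"}
]"""

# Index the batch by comment id so a single comment's codes can be inspected.
codes = {row["id"]: row for row in json.loads(raw)}

# Assumed id: the row whose values match the Coding Result shown above.
code = codes["ytc_UgyyGegSnJVB8FtgEYB4AaABAg"]
print(code["responsibility"], code["emotion"])  # distributed fear
```

Because the model returns one array for the whole batch, indexing by `id` is what lets the UI show the exact row behind any single comment's coded dimensions.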