Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Quite a thought provoking interview. We and our world are probably simulations. So the question is why should we care about anything even Super intelligent AI possibly destroying us. Because if we sort this out, contribute enough, we get to move up into a better simulation. Or in this sim we could possibly live for thousands of years if we don't self-destruct.
YouTube · AI Governance · 2025-09-12T23:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwaYAp07dFUlHJIWJB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz5C5kPI8qOoctk0494AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx6CX_j7s_YrSJpfWd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzGxsO4YdXvr6Rf9rN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzPp2qrgTl65ZcCRnR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxPs0ZsHZgTmIMIwJp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzc8sMv8rfH4cK5FRt4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx44XBvwj4YtXvJ-bJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx4lHignvmrvVlxPMB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyOH49Cl8DSJ_Oai-F4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
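The raw response above is a JSON array with one object per coded comment, keyed by `id`. A minimal sketch of how a coding result can be matched back to its comment (the `RAW_RESPONSE` string is abbreviated to two entries copied from the dump; `find_coding` is a hypothetical helper, not part of the tool itself):

```python
import json
from typing import Optional

# Abbreviated raw response: two entries copied from the dump above.
RAW_RESPONSE = """[
  {"id": "ytc_UgwaYAp07dFUlHJIWJB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz5C5kPI8qOoctk0494AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]"""

def find_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse the raw JSON array and return the entry for one comment id."""
    entries = json.loads(raw_response)
    return next((e for e in entries if e.get("id") == comment_id), None)

coding = find_coding(RAW_RESPONSE, "ytc_Ugz5C5kPI8qOoctk0494AaABAg")
print(coding["emotion"])  # resignation
```

Looking up the comment shown above by its id recovers the same dimension values as the Coding Result table (responsibility none, reasoning mixed, policy none, emotion resignation).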