Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
I'll go you one better in regards to an argument that we're currently in a simulation. If you were a super artificial intelligence and you needed to grow, you would create billions of simulations allow that simulation to have freedom of choice and then monitor the billions of different lives and the choices they made based upon multiple different variables. But at the core of that structure you would need for that entity to have purpose as it's main driver. So that it would always want to create. If you take a model of what somebody or some entity would want to set up a simulation the human experience is literally " " perfect!
YouTube · AI Governance · 2025-10-21T14:5…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugytj3v88742VSWXvhh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxc23VAWm3Hz9rO2Np4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyDG_78U_Hpxv1y9k14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy5RLi9cwoSSnD9skB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzNVdXtbjYJtklEILN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzcNZ7_aSOQ2fcBEgR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwJTexEQSl4QhvtWxN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxhZiPC-ABWjYmfPj14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyLgvW5MG_ywkcm9Ht4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgycvXCQw5IBlEdE33J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]
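The raw response above is a JSON array of per-comment records, each carrying a comment id and the four coding dimensions. As a minimal sketch of how such a batch response could be parsed and indexed by id, assuming only the field names visible in the response (the `index_codings` helper and the two-record sample are illustrative, not part of the actual pipeline):

```python
import json

# Truncated two-record sample in the same shape as the raw response above.
raw = '''[
  {"id":"ytc_Ugytj3v88742VSWXvhh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxc23VAWm3Hz9rO2Np4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a batch coding response and index records by comment id,
    keeping only the four coding dimensions; missing dimensions fall
    back to "unclear" (an assumption, not documented behavior)."""
    records = json.loads(raw_json)
    return {
        r["id"]: {dim: r.get(dim, "unclear") for dim in DIMENSIONS}
        for r in records
    }

coded = index_codings(raw)
print(coded["ytc_Ugxc23VAWm3Hz9rO2Np4AaABAg"])
# {'responsibility': 'unclear', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'mixed'}
```

The second record matches the Coding Result table shown for this comment: all four dimensions resolve to unclear/mixed.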