Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At 59:00 he could be doing a better job of explaining Nick Bostrom's argument. Bostrom posits a trilemma of three possible scenarios, only one of which can be true.

Scenario 1 is the scenario we've believed to be true until now: creating human-level AI is not possible. If scenario 1 is true, we are not in a simulation.

Scenario 2: human-level AI is possible, and we're the first to invent it. If scenario 2 is true, we are not in a simulation.

Scenario 3: human-level AI is possible, and we are not the first to create it. If scenario 3 is true, we are likely in a simulation.

Scenario 3 makes a simulation likely because of the diminishing cost of technology. If the cost of creating a human-level simulation approaches zero, there will be an effectively infinite number of human-level simulations running from this level of reality alone, and our odds of being base reality are 1 in that number. Those simulations can create countless simulations of their own, so if creating a human-level simulation is technically possible, the odds of this being base reality drop to essentially zero.
Source: youtube · AI Governance · 2025-09-04T23:2… · ♥ 2
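The probabilistic core of that comment's argument can be stated compactly (a reader's gloss, not part of the coded data, with N the total number of human-level simulations ever run):

\[
P(\text{base reality}) = \frac{1}{1 + N}, \qquad \lim_{N \to \infty} P(\text{base reality}) = 0
\]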
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
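The table above is one entry of the batch response below, selected by comment ID. A minimal sketch of that lookup in Python, assuming the raw response is a JSON array of objects with the id, responsibility, reasoning, policy, and emotion fields shown here (the function name and the "unclear" fallback are illustrative, not confirmed pipeline behavior):

```python
import json

# Dimension names as they appear in the batch response below.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_result(raw_response: str, comment_id: str) -> dict:
    """Look up one comment's coded dimensions in a batch LLM response."""
    entries = json.loads(raw_response)       # the response is a JSON array
    by_id = {e["id"]: e for e in entries}    # index entries by comment ID
    entry = by_id[comment_id]                # KeyError if the model skipped it
    # Fall back to "unclear" for any dimension the model omitted.
    return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}

# The result displayed above matches the values of this entry:
# coding_result(raw, "ytc_UgwvBHZVTTGYPwz5Go54AaABAg")
# -> {'responsibility': 'unclear', 'reasoning': 'consequentialist',
#     'policy': 'unclear', 'emotion': 'indifference'}
```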
Raw LLM Response
[ {"id":"ytc_Ugzk0Vs9z7gOg-GAafN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwf4s7pPDddFIMmL-R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz1vUyEyBa8xmiWvkl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxY_rMWkv0e551Q_AB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugy6I64woBuYEUpQNBd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxKPOcRmd88QOthGCh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwvBHZVTTGYPwz5Go54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxjWE36SJxpAhS2TQ14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyS4hQ7lcrkVB9Ja9l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzK-EYPFraj877Jt_54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]