Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My question is, if Dr. Yampolskiy believes we already are in a simulation, then why even have this conversation? What is the incentive of our experimenters allowing us to live forever? What is the point of limiting or worrying about AGI if there is a high probability that AI has already won and we are just 1s and 0s in its simulation?
youtube · AI Governance · 2025-10-02T20:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugy9yDckNigTAhQ8ZyF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzMJRV4BHdwHHpovgh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxdCqS6lJwVvlJJk5R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzUK-34_pa798aYlYJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgwBn1peIePKgkUuPgV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugx4zaLw2Sa-kSPGfrR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyK_Xx76SWP-7Nn2CZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwui3xxrQcTbjBavP54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwppQue7KjNszMqfzx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxh_bnOO0JB1MCxAWJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]