Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If simulation theory is true — meaning we’re living inside a simulation — and Dr. Yampolskiy is nearly certain of it, then when we consider the issue of AGI or AI in general, doesn’t that imply we’ve already failed? Or rather, that someone long ago failed to prevent AGI or AI from taking over — since our very existence as a simulation would be the result of that?
youtube AI Governance 2025-10-25T11:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugyg6U2aA6M7auQodnR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxKQUiPErPeb-aNtVh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_Ugyi1GRimpd9fW7aX_B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugw0q8J-LjqfP3vWEat4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz9TMACLxJaV25b5Hd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzTaQ3KyBMDLOO2eaF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgyJnueEQ8KzVEWoXKF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxbW2k8OlXDWmGoXP54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwElWTxLgiPk9MhxyF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwIz96HpTIk1y1cIA54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"]}
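Note that the raw response above closes with "]}" rather than "}]", so it is not valid JSON. A minimal sketch of how such a response could be checked, assuming a pipeline that parses the batch and falls back to "unclear" when parsing fails (the helper name `parse_coding_response` and the fallback behavior are assumptions for illustration, not the tool's actual code):

```python
import json

# Dimensions each record is expected to carry (taken from the response above).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str):
    """Parse a batch-coding response string.

    Returns a list of record dicts, or None when the payload is not valid
    JSON -- e.g. a response that closes with "]}" instead of "}]".
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(records, list):
        return None
    # Keep only records that carry every expected dimension.
    return [r for r in records if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]

# Shortened illustrations of the two endings (hypothetical record id):
broken = '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"]}'
fixed  = '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}]'

print(parse_coding_response(broken))  # None: json.loads rejects the "]}" ending
print(parse_coding_response(fixed))   # a list with one complete record
```

Under this assumption, the parse failure would explain why every dimension in the coding result above shows "unclear": the individual per-comment codings exist in the text, but the malformed closing bracket prevents the whole batch from being read.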