Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@imthinkingthoughts take the example given & remove the bits about the verifier, controlling the AI, or any of that. Just, we create & improve an intelligence which surpasses ours greatly & has goals. The emergent goals we can imagine it having are things like survival, resisting change, & acquiring additional freedom of action. Just from that, it has reason enough to eliminate humanity, on the off chance humanity might try to deactivate it because it is doing something it senses they do not value.
YouTube · AI Governance · 2025-01-08T18:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_Ugz_yKuoKnTOTwpefPt4AaABAg.AAiaptz-j_1AAmOMFZF9M6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugx8g_5sHOdatpNS1ql4AaABAg.AAiaMK8MSDxAAl5UmKk8NB","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugy6CiuOR6gKecGxKsd4AaABAg.AAiaGgR1H05AAkEW-cmu-_","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgzCvXXYuGSCHhd8t-R4AaABAg.AAi_R3e0iIkAAjHps0E2Kq","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgyJ8K6sL0wfC6SMlq54AaABAg.AAi_DdDRj5UAAl8z-LKwiI","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwePVVbMUGmOuwAgch4AaABAg.AAiZnSWW2klAAjIytb5cUL","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgwePVVbMUGmOuwAgch4AaABAg.AAiZnSWW2klAAjJ1__3PS3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwePVVbMUGmOuwAgch4AaABAg.AAiZnSWW2klAAl3hJVF6k9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgwePVVbMUGmOuwAgch4AaABAg.AAiZnSWW2klAD2JjEBDIcN","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxWbWU8HnwXTGqsyxd4AaABAg.AAiYeR1UICtAAigX-3Eus1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
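The raw response above is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and validated before use; the allowed value sets below are inferred only from the codes visible in this response, so the real codebook may contain additional categories, and `parse_batch` is a hypothetical helper, not part of any pipeline described here:

```python
import json

# Allowed values per dimension, inferred from the response above
# (assumption: the actual codebook may define more categories).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip anything that is not an object with a comment id.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension carries a known code.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical single-record batch in the same shape as the response above.
example = ('[{"id":"ytr_example","responsibility":"ai_itself",'
           '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(len(parse_batch(example)))  # 1
```

Filtering rather than raising keeps one malformed record from discarding the whole batch; rejected records could instead be logged and re-queued for recoding.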