Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Because AIs do not have a sense of morals. Given a system with the terminal goal of maximizing a reward function, that system will do anything to achieve that goal. For instance, lying is a very logical thing to do if you want to achieve a goal. And transformer models have apparently already shown this behavior. Furthermore, I suggest you look into the alignment problem (and the inner alignment problem), which was hinted at in this video. There seems to be no solution to it yet, even though it is probably the most important thing in AI safety.
Source: YouTube · Video: AI Governance · Posted: 2023-05-13T07:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
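For downstream analysis it can help to give each coding result an explicit type. Below is a minimal Python sketch of one record, assuming only the four categorical dimensions and the timestamp shown in the table above; the CodingResult class is illustrative, not part of the actual pipeline.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    responsibility: str  # who is held responsible, e.g. "developer"
    reasoning: str       # moral reasoning style, e.g. "consequentialist"
    policy: str          # policy preference, e.g. "regulate"
    emotion: str         # dominant emotion, e.g. "fear"
    coded_at: datetime   # when the LLM coded this comment

# The record shown above, as a typed value.
result = CodingResult(
    responsibility="developer",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```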
Raw LLM Response
[ {"id":"ytr_UgyJOqiY3oBCZUklAxZ4AaABAg.9pYu356Klqt9pZHd81xa4_","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyJOqiY3oBCZUklAxZ4AaABAg.9pYu356Klqt9pakOp1ITmF","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytr_UgyJOqiY3oBCZUklAxZ4AaABAg.9pYu356Klqt9pcPaxP-iNK","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_Ugz3NiQ0-Y4xl2owvZB4AaABAg.9pYMsVOADj99pYNiS6ysrS","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytr_UgzJ8SXtMBgt1r0tUcR4AaABAg.9pYGAsLTPFJ9qSQMKUpVGd","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"}, {"id":"ytr_UgxHZV1Cgo4_XLTbfuN4AaABAg.9pYFlb-6pUl9pYJ6EACDx2","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugx2WyLZ1oCceu188cl4AaABAg.9pYA0sYL2sP9pZIKYpvEi-","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_Ugz9rdXWWq0Hmkxn4j14AaABAg.9pY5KBNCzTD9pYkg8LDwru","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugz9rdXWWq0Hmkxn4j14AaABAg.9pY5KBNCzTD9pZyHEFXLKK","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyLegCW7Gm22EQStzJ4AaABAg.9pY2N-Nhwvs9pdeyJRw1aF","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]