Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> Why have AIs been trained on data that includes some of the most imaginative ways for an AI to destroy and supersede humanity ever conceived? Wouldn't giving it extremely dangerous "ideas" about what it is and could be capable of be the FIRST thing they should be working to remove from the training data and reverse course on? You know... the data it's using? The answer to this is pretty straightforward. Current systems are extremely unlikely to seek out or be able to effectively carry out an attack, while future systems are extremely likely to encounter this information or derive it on their own. There is therefore little to no risk in teaching current systems about these things, and doing so may help mitigate the risk in future systems: we can use current systems in deconfusion research to help us align future systems.
Source: reddit · AI Moral Status · 1685650521.0 · ♥ 6
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
```json
[
  {"id": "rdc_jmiu1k6", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_jmiyavu", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_jmfrnw7", "responsibility": "media", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jmfyo7p", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_jmi5ky3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
```
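The raw response is a JSON array with one object per comment id, carrying the four coded dimensions. A minimal sketch of how such a response could be indexed by comment id (the `parse_codes` helper is illustrative, not part of the original pipeline):

```python
import json

# Truncated sample of a raw LLM response: a JSON array of per-comment codes.
raw = '''[
  {"id": "rdc_jmiu1k6", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_jmiyavu", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]'''

def parse_codes(raw_response: str) -> dict:
    """Map each comment id to its coded dimensions."""
    return {row["id"]: row for row in json.loads(raw_response)}

codes = parse_codes(raw)
print(codes["rdc_jmiu1k6"]["emotion"])  # fear
```

Looking up a comment's id then yields the same dimension/value pairs shown in the coding-result table above.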