Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I do wonder if one way to guarantee a more moral ai, like could you design a machine that turns off if it doesn’t “eat” and will stop learning if it stops talking to other people. Basically just mimic some of the evolutionary needs of humans to maintain the conscious checks on brutally logical thinking like “I can solve climate change by killing all humans.”
youtube AI Moral Status 2023-12-31T05:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugya7BAiWRosgXN8gS94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyaYBzMw0c_1c4RmGR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwV2XmQtYdciPGEpWp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzIhjGjrS9CHTbEA3h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwW_gN2lJJxd3kStwt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxPkHrfyspAiaQP2-J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzzd-A2XQAV50MeDyZ4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyOKp0Qx0j-9vkZvD14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLKp0IRvLt8bpDSFN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxIm3PZyj1Iq9atSVZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
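
A raw response in the shape above can be turned back into per-comment codes with a small parser. This is a minimal sketch, not part of the original pipeline: it assumes the model returns a JSON array of objects, each carrying an "id" plus the four coding dimensions seen in the sample (responsibility, reasoning, policy, emotion), and that missing dimensions should default to "none".

```python
import json

# The four coding dimensions present in the sample response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding_response(raw: str) -> dict:
    """Map each comment id to its coded dimension values.

    Assumes `raw` is a JSON array of objects with an "id" key and the
    dimension keys above; absent dimensions default to "none".
    """
    coded = {}
    for record in json.loads(raw):
        coded[record["id"]] = {
            dim: record.get(dim, "none") for dim in DIMENSIONS
        }
    return coded


# Example using one record from the response above (the comment whose
# coding result is shown in the table).
raw = (
    '[{"id":"ytc_UgwW_gN2lJJxd3kStwt4AaABAg",'
    '"responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"approval"}]'
)
codes = parse_coding_response(raw)
print(codes["ytc_UgwW_gN2lJJxd3kStwt4AaABAg"]["policy"])  # → regulate
```

Keying by comment id makes it cheap to join the codes back onto the original comment text, as the page above does when displaying a "Coding Result" next to each comment.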