Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A bit off-topic, but I want to know why we can't just hard code some basic morality or ethics into these AI programs, so that they'll always only do and prescribe things that support human flourishing, rather than going rogue and wiping us all out.
youtube 2026-02-25T03:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugwq6U5YEAaScE_2lbl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyT7-8qIrU5XBy6lPV4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwlilJF6DODZeCm3rp4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwPor56RDYgjQKZVvd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzWFV_GTreTtxcOn9J4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwoN9yXDoniZIDc0JR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzcHoQv-1fNNTBBAMh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_Ugx54zY_yU4SQhl22a54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyF0waFezgYylsxsdJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzi4EQN51w8QPC01vx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
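A batch response like the one above is only usable if every record carries a valid label on every dimension. The sketch below parses such a response and flags any record whose labels fall outside the codebook. The allowed label sets are assumptions inferred from the values visible in this response; the actual codebook may define additional values.

```python
import json

# Allowed labels per dimension, inferred from the response shown above
# (an assumed schema, not the project's official codebook).
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}

# Two records copied from the raw response above, standing in for the full batch.
raw = '''[
  {"id":"ytc_UgwoN9yXDoniZIDc0JR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyF0waFezgYylsxsdJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

def invalid_ids(records):
    """Return ids of records with a missing or out-of-schema label."""
    bad = []
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                bad.append(rec.get("id"))
                break
    return bad

records = json.loads(raw)
print(invalid_ids(records))  # → [] when every label is in the schema
```

Running the check per batch, before the labels are written into the coding-result table, localizes malformed model output to the specific comment ids that need re-coding.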