Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You mean establish ethical guidelines for how your AI operates. If your AIs code is designed for.. oh idk.. murdering people abroad, you would probably want to make sure the AI knows who is a threat and who is a civilian. If the only metric of success in the AIs mind is kill count, there will be some intense blowback and repercussions. This man understands this and went toe to toe with the US gov over it when no one else would.
YouTube 2026-03-18T15:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugzun2mOiFHh--MDOzJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyEn97_M7i_7WdNhmN4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwujV8ak206Q3dWEp14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzIhnPQfd9h_Jzds514AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy-aplZ9p6WKq19SG14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyGi8UPMR2U6ULT9yJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyFvo4DJwX7MMUshPx4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxRShnQeHZTljw6_Md4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzi2uJRWWqb-Nlc64d4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwYT5T5m33FYjmOx9Z4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]
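A raw batch response like the one above is only usable once it parses as JSON and every dimension holds a value from the codebook. The sketch below shows one way to validate such a response in Python. The allowed values are inferred from the responses shown here and are an assumption; the actual codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the batch above
# (an assumption; the real codebook may list more categories).
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference"},
}


def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject out-of-schema values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows


# Example with a hypothetical comment id:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
rows = validate_batch(raw)
print(len(rows))  # → 1
```

Rows that fail validation raise immediately, so a malformed or hallucinated label surfaces before it reaches the coding table rather than being stored silently.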