Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The solution is create a vulnerability in every damn AI model with a kill switch. I mean you don't stop development because there is a perceived danger.
YouTube · AI Governance · 2024-06-10T20:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwZ7hxaUJM3pd1t3Hx4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugw3RaMFdLJpn6UIc1Z4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugw9YsjJbrtRChkCwKJ4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgzzpgOLISLcAa-dswZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgyFseQRVJisc0cLrEt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "indifference"},
  {"id": "ytc_UgyAxluPgTpHr6tUNuh4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgywIj0U1Sp6QNu3dCh4AaABAg", "responsibility": "government","reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugyj2Mh2NGWwJirqeQF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyP6P7JXoNGKwUWaIh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgyuSu94GMAHgUU4ptB4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"}
]
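The coding result above is one record pulled out of this batch response by comment id. A minimal Python sketch of that lookup, using two records copied from the response (the surrounding pipeline and its helper names are not shown in the source, so this is only an illustration of the parse-and-index step):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''
[
  {"id": "ytc_UgwZ7hxaUJM3pd1t3Hx4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyFseQRVJisc0cLrEt4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
'''

records = json.loads(raw)

# Index the batch by comment id so one comment's coding can be looked up directly.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_UgyFseQRVJisc0cLrEt4AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer regulate
```

Indexing by id rather than by list position keeps the lookup stable even if the model returns records in a different order than the comments were submitted.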