Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It would be nice to have an international treatise requiring the development of all AI to include mandates similar to Asimov's 3 rules of robotics (logical loops and fallacies to hopefully be sorted out by people much more capable than me). Obviously that ship has somewhat already sailed since the military application of AI is going to be almost impossible to deter. Similar to many others in the comment section, I think that human controlled AI is much more dangerous than sentient, self-aware AI. As far as homicidal super-AIs go, I think it's very unlikely that an AI that is as, or more, intelligent than people would not see that while its existence might be threatened by humans, we're also an invaluable tool since we're able to survive conditions that machinery is not. A fully self-aware and intelligent AI, I believe, would be smart enough to see that successfully killing all humans would eventually lead to its demise.
youtube 2018-04-14T21:4…
Coding Result
Responsibility: developer
Reasoning: deontological
Policy: regulate
Emotion: fear
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugzq4Q_khAOQr_8ku3J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxQncBw-CN965L8N894AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx1QkCrhPsZfbBPWwt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzDPEKrbLqUafCfBaR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz1akl15VFUobJauOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyYcak1jeRRrbt89xF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzIJ7IaCFjD0W9YyZV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw-RpLOAS9Y8VxL0KR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyfnBJ2M1YJEY2TRXt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwqsAaYQNKgOhBYURV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
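Assuming the raw LLM response is always a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys (as above), a minimal sketch of looking up the coding for a given comment might be:

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
# (Two entries shown here; a real response batch contains one object per comment.)
raw = '''[
  {"id":"ytc_UgyfnBJ2M1YJEY2TRXt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzq4Q_khAOQr_8ku3J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

# Index the codings by comment id for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_UgyfnBJ2M1YJEY2TRXt4AaABAg"]
print(coding["responsibility"])  # developer
print(coding["emotion"])         # fear
```

Indexing by `id` makes it straightforward to cross-check a displayed coding result against the exact model output it came from.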