Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I postulated the idea that if A.I did anything dangerous or made decisions which were harmful to humans or in some way "anti human" that its guard rail would be to SHUT DOWN a third of its processors. I made the point that our decisions are made knowing that we could suffer pain, loss or even death. A.I doesnt have that threat so to install a reduction in speed and power and ability would be a good way of installing behaviour modifications.
Source: youtube — AI Moral Status — 2026-03-19T23:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        contractualist
Policy           regulate
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz0QpHY93IOdKR6WdV4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_UgxJJMb3acF36nwECkx4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_Ugx3K1QAoVis73CVMtl4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzaee690M8wryUyNg14AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability","emotion": "mixed"},
  {"id": "ytc_UgymB-_6DbQjb9vJ89N4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugz6V4iywZHJmyeCFop4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugx3x4Jqi5lfK22p7p54AaABAg", "responsibility": "developer",   "reasoning": "contractualist",   "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugy0QkBpQjICzLfE3vd4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_Ugy7LDQ1Q2Fjf5kOANV4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgwuQSdWoI3rn8mgzsd4AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "regulate", "emotion": "mixed"}
]
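A batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the data shown on this page (there is no official schema in the source), and the `ytc_` id prefix is likewise an assumption based on the ids seen here.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred from
# the example records above, not from any documented schema.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"mixed", "outrage", "fear", "indifference", "approval", "unclear"},
}

def validate_record(record: dict) -> list:
    """Return a list of problems found in one coded record (empty = valid)."""
    problems = []
    # Assumed convention: comment ids carry a "ytc_" prefix, as in the data above.
    if not str(record.get("id", "")).startswith("ytc_"):
        problems.append("missing or malformed id")
    for dim, allowed in ALLOWED.items():
        if record.get(dim) not in allowed:
            problems.append("unexpected value for %s: %r" % (dim, record.get(dim)))
    return problems

def parse_llm_response(raw: str) -> list:
    """Parse a raw LLM batch response and keep only records that validate."""
    return [r for r in json.loads(raw) if not validate_record(r)]
```

Filtering rather than raising keeps one malformed record from discarding the whole batch; invalid records can be logged and re-coded separately.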