Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Philosophically and scientifically you are making a bunch of leaps in logic. We already have machines that surpass the limits of human intelligence. So far we have not been able to make a machine that desires anything. I have not seen any suggestion that we would even be able to. A computer could be trillions of times the raw "intelligence" of humans and still have no desire. We could easily, accidentally, create machines that destroy us, but that does not mean the machines are self-aware. So before we ever ask if robots deserve rights, we need to ask whether robots could ever desire anything. The most advanced AI on the planet may be able to convince us it is human while having no more will or self-awareness than a calculator.
YouTube · AI Moral Status · 2019-08-14T03:1…
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | consequentialist
Policy         | unclear
Emotion        | indifference
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwPxg6X6eYcSf3MNVl4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugw_Do7TEPRmKjfC_IJ4AaABAg", "responsibility": "unclear",     "reasoning": "consequentialist", "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugwu475XbyBNRR7DQzZ4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugzei0lpJr7UXRTL8nx4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UgxcTSss5Zr3t6EVoz94AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgyOwpvMLnjdaWRhlVt4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgztoeP-SEHVnbKHfWh4AaABAg", "responsibility": "unclear",     "reasoning": "virtue",           "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgyvkIO6mc0x5c0jxsp4AaABAg", "responsibility": "unclear",     "reasoning": "deontological",    "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_UgzTJh8Z_amz7izsx8t4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzj6R1otD4Ujgt1hMd4AaABAg", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "regulate",  "emotion": "mixed"}
]
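The raw response is a JSON array with one object per comment id, batching several comments in a single model call. A minimal sketch of how such an output could be parsed back into per-dimension codes; the `code_for` helper and the two-entry excerpt are illustrative, not part of the actual pipeline:

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes.
raw = '''[
  {"id": "ytc_UgwPxg6X6eYcSf3MNVl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugw_Do7TEPRmKjfC_IJ4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id.

    Missing dimensions fall back to "unclear"; an unknown id raises KeyError.
    """
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
    raise KeyError(comment_id)

print(code_for(raw, "ytc_Ugw_Do7TEPRmKjfC_IJ4AaABAg"))
```

Looking up the id of the comment shown above recovers exactly the Dimension/Value rows in the coding result (unclear / consequentialist / unclear / indifference).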