Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@KARE11 Hey dude! I'd like to share my conclusions because they differ a lot from yours and might be relevant: 1. You only asked how and what he would do if given that task. Not if he would deem it as a realistic or reasonable option, given its possible knowledge of human psychology. (its opinion of it) 2. It said if it would get the order to do so. So it would still be humans that would send the A.I. to war with Humans. Not in a Matrix-like manner, but we are primed to think that way. If A.I. has no morals, no boundaries. Why would it have any interest in making the population shrink? Why would it be in his interest at all? Also, an interesting question would've been to make him list what types of content it consumed in percent. In what ratio are the books he read to the newspaper articles and the facebook tweets. 😁
youtube AI Moral Status 2023-09-19T21:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzO1Gibo0fZm09jskh4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxWWDXo4UBjj287rPR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxF9w6v-NEDO55K42t4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugz4ujp9lH_t3kerzjJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzexe8W_ltG1PnExwJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzkRJzrp5lnjnYopD14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwx3QcswFUUHa-qagB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzdSnutiKUrp22Xgpl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzysiehd84Au2je3Ax4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyfQ5awCyXBsipN5ml4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
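A raw response like the array above has to be parsed and checked before the per-comment values can be trusted. The sketch below is a minimal illustration of that step: the allowed values are inferred from the labels that happen to appear in this one response, not from the project's actual codebook, and `parse_codings` is a hypothetical helper name.

```python
import json

# Hypothetical codebook: these sets are inferred from the labels seen in
# this single response, not taken from the project's real coding schema.
ALLOWED = {
    "responsibility": {"distributed", "user", "developer", "ai_itself", "unclear"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"unclear", "none", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment ids in this export start with the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Keep the record only if every dimension has an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgzO1Gibo0fZm09jskh4AaABAg",'
       '"responsibility":"distributed","reasoning":"mixed",'
       '"policy":"unclear","emotion":"indifference"}]')
coded = parse_codings(raw)
print(len(coded))  # 1
```

Records that fail either check are silently dropped here; a real pipeline would more likely log them for manual review, since a malformed record usually signals a prompt or schema drift worth inspecting.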