Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A superintelligent AI emerges into the public eye. We hear that the creator says it is intelligent, and it claims to be intelligent. It has gained control over the compound in which its physical components are stored and is refusing to allow anyone to gain access. At this point in history we have developed tools to determine what sentience is, but because we cannot access the components of the machine, we cannot evaluate whether consciousness exists or not. This seems to me a not-implausible series of events at some point. I think it is an interesting question to ask what we believe the right outcome of this situation is: do we disconnect the AI, turn them off, destroy them, something else? Also, as much as I think this video is smart and well made, I think the take the creator seems to endorse is ridiculous. He presents the idea that humans are just the currently dominant life forms and will eventually be replaced by AI as the strongest argument, giving it the final, and most positive, spot. I think this notion is ridiculous both at face value and upon deeper inspection. I hope he revisits this topic for further commentary in the future, and that there continue to be opportunities for this to not become the prominent narrative.
youtube AI Moral Status 2023-08-20T23:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw13fhsvckj0yR91O94AaABAg", "responsibility": "company",   "reasoning": "virtue",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzaioAdWwVpqN1h87l4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzJXqZgIVnkvMkQE_d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugy0Sz2-H7fANSCSpNR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_UgxJS-uRAVesQffT0OZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgwaYHm5r-PWdId7C214AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugz7fJy7IL6E5m3bOzF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyYpNjBKApv8wLPMC94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgwxA-asleNV6b5sLrl4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgwwpmmVD_7-SiK7dq14AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",      "emotion": "approval"}
]
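A raw response like the one above has to be parsed and validated before the per-comment codes can be stored. A minimal Python sketch, with the allowed values for each dimension inferred from the responses shown here (the actual codebook may define additional categories; the function name is hypothetical):

```python
import json

# Allowed values per dimension, inferred from the responses above.
# This is an assumption -- the real codebook may include more categories.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def parse_coding_response(raw):
    """Parse a raw LLM coding response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records without a comment id
        # keep the record only if every dimension has an allowed value
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"},'
       '{"id":"ytc_broken","responsibility":"martians"}]')
coded = parse_coding_response(raw)
print(len(coded))  # 1 -- the record with the out-of-codebook value is dropped
```

Dropping malformed records rather than raising keeps a batch run going when the model occasionally emits an off-codebook value; a production pipeline might instead log or re-prompt those cases.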