Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why would a robotic AI want to harm humans? Robotic AI could live anywhere, including in space. It's not in competition with humans for resources. It could go live on Titan or Europa. It could combine with quantum computers (which would be even happier in the cold of outer space) & travel to the stars. ... So the fear of far-future super-intelligent robots is probably unfounded. Such a being could go anywhere & exploit so many resources -- why would it want to harm humanity?
youtube AI Moral Status 2025-11-06T04:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyVF3XPGOawS-54AOx4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw_O5NAfCuhi_69hG14AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzrH4v7YnVgcfw8VAh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx-uGju0uiNmQGQ5EN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwGI1fCaYO7Ssoou9l4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw2nnMGueTMgcUg_iJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzrNR7UCeFwc30YfQR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzthlLbXFc2bC1VB7l4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxkDVrUfI2M5eQyJ1R4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwdKFaUZPEp9dUAmed4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
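The raw response is a JSON array with one object per comment, keyed by comment id. A minimal Python sketch of how such a response might be parsed to recover the coding for a single comment (the excerpt and ids come from the response above; the helper name `coding_for` is hypothetical, not part of the pipeline):

```python
import json

# Excerpt of the raw LLM response above (two of the ten objects).
raw_response = '''
[
  {"id": "ytc_UgwGI1fCaYO7Ssoou9l4AaABAg",
   "responsibility": "unclear", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzrH4v7YnVgcfw8VAh4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]
'''

def coding_for(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            # Drop the id so only the four coded dimensions remain.
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(comment_id)

result = coding_for(raw_response, "ytc_UgwGI1fCaYO7Ssoou9l4AaABAg")
print(result)
# → {'responsibility': 'unclear', 'reasoning': 'consequentialist', 'policy': 'none', 'emotion': 'approval'}
```

Matching the batch response back to individual comments this way is what lets the dashboard display a per-comment Coding Result like the table above.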