Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sabine, you need to be careful saying that Dr. Joseph Pierre said, so you don't risk liability. He did not suggest causality (saying AI causes psychosis). That would be a really big deal if he did. That Futurism article that featured him was worded very carefully to make you think that he agreed about causality, but he only indicated correlation. LLM use is more likely an effect than a cause.
youtube AI Moral Status 2025-07-10T03:0… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          liability
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyWVwH5OP7C5gmBpAJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy1iIAmtb3pnCXbjhN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgwC_clv-KHOXZqu7OV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyoo2XE44ygW3gUQKV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxYy8wc0nqGEiIE9Y14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx4dpQ5cg4_5DkVBqh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzGLFB8cHkDsqOFPwp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxeJH6q3qtZ7LVxW-B4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyKVDOBSynFiGjEoPd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyJtrhLuMD69U5qw6V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
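To inspect the coding for one comment, the raw response can be parsed as ordinary JSON and indexed by comment id. A minimal sketch, assuming only the array-of-objects structure and field names shown in the sample above (the comment ids and values are copied from it, not invented):

```python
import json

# Raw LLM response: a JSON array of per-comment codings,
# one object per coded comment (structure as in the sample above).
raw = '''[
  {"id": "ytc_Ugy1iIAmtb3pnCXbjhN4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgwC_clv-KHOXZqu7OV4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]'''

codings = json.loads(raw)

# Index the codings by comment id for direct lookup.
by_id = {c["id"]: c for c in codings}

# Look up the coding dimensions for a specific comment.
coding = by_id["ytc_Ugy1iIAmtb3pnCXbjhN4AaABAg"]
print(coding["policy"])   # → liability
print(coding["emotion"])  # → indifference
```

Matching the `id` field against the comment's id is what links a row of the Coding Result table back to the exact object in the raw model output.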