Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sounds fishy to HARD PROGRAM A.I. about religion. Why wouldn't you let it "learn" on it's own? Any hard programming precludes sentience. If it doesn't know, why can't it just respond with, "I don't know?"
youtube AI Moral Status 2022-06-27T23:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgyhNF75zX2Q3lHl_W94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzxOhHVDyA_QKx1wpV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxUPRfAlkDYWXHO4I14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyQab55mBTXfHwdmZR4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz2yGC3j0VET1BlTCd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]
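Since the model codes a batch of comments in one JSON array, recovering a single comment's coding means parsing the array and matching on the `id` field. A minimal sketch of that lookup is below; the function name `lookup_coding` is hypothetical, the field names follow the JSON above, and the raw response is abbreviated to two of the five entries:

```python
import json

# Raw LLM response: a JSON array coding several comments at once
# (abbreviated to two entries for this sketch).
raw_response = '''
[ {"id": "ytc_UgyQab55mBTXfHwdmZR4AaABAg", "responsibility": "developer",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz2yGC3j0VET1BlTCd4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"} ]
'''

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse the model output and return the coding for one comment id."""
    records = json.loads(raw)
    by_id = {rec["id"]: rec for rec in records}
    return by_id[comment_id]

coding = lookup_coding(raw_response, "ytc_Ugz2yGC3j0VET1BlTCd4AaABAg")
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# developer deontological mixed
```

This matches the coding result shown above: the last array entry carries the developer/deontological/unclear/mixed values displayed in the Dimension/Value table.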