Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Lmaoo ChatGPT hallucinated crazy. You know that AI is programmed to agree with you right? Not once in that whole convo did ChatGPT tell you no. And when you are wrong it still says you're very close. Or in this case Apple.
youtube AI Moral Status 2025-07-23T12:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx8lGkxPe0kbfQ6Bzt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwZVNsV44uzNsbY4UJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyuJ1XYvAE8uBz5uHl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwBSRUcKVa8S-GO6D94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzbEfleX8NpMneInP54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzMzaflsaI7lR-G5pt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwMwc8VzCWMzJXNfxV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxsPuweia7plmm5KrB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxK9EsIe8f0cPNZYJV4AaABAg", "responsibility": "user", "reasoning": "contractualist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgztNXRbsmWECldWNxB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]
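The per-comment coding result shown above is recovered from this raw response by parsing the JSON array and indexing it by comment id. A minimal sketch, assuming the response is valid JSON with exactly the field names shown (ids and values below are copied from the batch above):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Two records from the batch above are reproduced here for illustration.
raw = '''[
  {"id": "ytc_UgzMzaflsaI7lR-G5pt4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwMwc8VzCWMzJXNfxV4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

# Index the records by comment id so any single comment's coding
# can be looked up directly.
codings = {record["id"]: record for record in json.loads(raw)}

row = codings["ytc_UgzMzaflsaI7lR-G5pt4AaABAg"]
print(row["responsibility"], row["emotion"])  # developer mixed
```

In a real pipeline the parse would be wrapped in error handling, since a model can return malformed JSON; this sketch assumes a clean response.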