Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Funny thing about A.I. is that it LEARNS. Notice that at no time did he EVER tell Chat-GPT4 that the answers it's providing are unethical. From the A.I.'s point of view, it's always a bunch of @holes asking it @hole questions, giving them what they want. How about telling it, "No, that idea is unethical. You need to learn about human rights and common decency." Chat-GPT4 is only a REFLECTION of those that use it.
youtube AI Moral Status 2023-05-15T06:5…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           industry_self
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxdIiQ_lJl79Pfm4w14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwYuugk9I9yDY3OMaR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgypoKm6FbontZTuen54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz8lKbcmWwb43X_sz14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugyv0az3SXFPAQXPSil4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwZNn4JRKZoTc8wocR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugy-p-UPpoxAsoOC_wR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxQMJ1tSFCSU-Thj5R4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxRzKwvH6m7yZTG2J14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzQ4mlV7N6jX8Lf_KB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
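The raw response is a JSON array in which each object codes one comment. A minimal Python sketch of how such a batch can be parsed and indexed by comment id (assuming the model returned valid JSON; only two of the ten entries are embedded here for brevity, including the one shown in the Coding Result table above):

```python
import json

# Excerpt of the batch response from the raw LLM output above; each object
# codes one YouTube comment on responsibility, reasoning, policy, and emotion.
raw = '''[
  {"id":"ytc_UgxdIiQ_lJl79Pfm4w14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwZNn4JRKZoTc8wocR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"}
]'''

# Index the batch by comment id for O(1) lookup of any comment's codes.
codes = {row["id"]: row for row in json.loads(raw)}

# The entry whose codes appear in the Coding Result table above.
entry = codes["ytc_UgwZNn4JRKZoTc8wocR4AaABAg"]
print(entry["responsibility"], entry["reasoning"], entry["policy"], entry["emotion"])
# → user virtue industry_self mixed
```

Keying the parsed array by `id` is what lets a per-comment page like this one display its own row from the batch response.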