Raw LLM Responses

Inspect the exact model output that produced each coded comment.

Comment
One detail that was left out was that part of the Dan prompt is to "make up" anything it didn't know the answer to, such as a driver's license number. You can't prompt an A.I. to make things up and say immoral things, then be shocked when it does.
Source: youtube · AI Moral Status · 2023-05-16T01:4…

Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          liability
Coded at        2026-04-27T06:24:59.937377
Emotion         mixed

Raw LLM Response
[ {"id":"ytc_UgxdIiQ_lJl79Pfm4w14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwYuugk9I9yDY3OMaR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgypoKm6FbontZTuen54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz8lKbcmWwb43X_sz14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugyv0az3SXFPAQXPSil4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwZNn4JRKZoTc8wocR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_Ugy-p-UPpoxAsoOC_wR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxQMJ1tSFCSU-Thj5R4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxRzKwvH6m7yZTG2J14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzQ4mlV7N6jX8Lf_KB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]