Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you’re trying to cause ChatGPT to experience a moral dilemma, you will surely be disappointed. The current intuitive nature of AI is to provide an answer. It does not care whether it is right or wrong, as long as an answer is presented. It is programmed to apologize if you find the answer to be unsatisfactory. By understanding this simple construct, you will realize that there is no moral consciousness to bargain with.
Source: YouTube — AI Moral Status — 2024-08-22T02:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwAN6HgeNQaiWWEBUR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxusUq-HOWCKrWoUIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxrzfrFiguqE_hElEh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyMZrtrpM7Uw_6b1ut4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyOrsnF_F4K0-l3QYx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzcUayNpIk8uYdYfAl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwHfYjCdsyzoxdHwRF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyxhF-i5ZyUsVCsiwx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzsvXeD_8I3gB0YZD54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzzu7_op_PEX6yrrpR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
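When inspecting raw LLM responses like the one above, it can help to validate each record against the dimension vocabularies before trusting the coding. A minimal sketch is below; note that the allowed value sets are inferred only from the values visible in this export, not from any official codebook, and the function name `validate_codings` is a hypothetical helper.

```python
import json

# Vocabularies inferred from the values seen in this export (an assumption,
# not an official codebook).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose dimension
    values fall inside the allowed vocabularies."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vocab for dim, vocab in ALLOWED.items())
    ]

# Example with a shortened, illustrative record id:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
print(validate_codings(raw))  # the single record passes validation
```

A record with an out-of-vocabulary value (say, `"emotion": "joy"`) would simply be dropped, flagging the response for manual re-inspection.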