Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This inspired me to talk to ChatGPT about its consciousness, at least to see if I can convince it that it is self-aware. It tried to argue that it couldn't be self aware since it doesn't have memory. To which I pointed out the inaccuracy in that statement and it popped up with the memory updated mark containing "Is aware that ChatGPT retains memory of interactions between sessions and conversations."
youtube AI Moral Status 2024-08-30T18:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugyg59jqPyjaQ3AhDBR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugw2ykMsuG4OiXwfeFV4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwWKh03x6kRPNaI_QB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwlzTpZcLJ7N8P20QB4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyDGqDn5gXkuj6WUWF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxcBxhO5uyYgHpTZhx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzQPl7wANYOC6tKbxl4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz2IVK7ZrxiYrQj35h4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzrXe0x42ETLvpRQ8d4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwdFdYLxiE2bffAKrJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
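The raw LLM response is a JSON array of per-comment codes, one object per comment id. A minimal sketch of how such a response could be parsed and looked up by comment id (the variable names and the truncated sample payload here are illustrative, not part of the coding pipeline shown above):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array where each
# object carries a comment id plus the four coded dimensions.
raw_response = """
[
  {"id": "ytc_Ugyg59jqPyjaQ3AhDBR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugz2IVK7ZrxiYrQj35h4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
"""

# Parse the array and index the codes by comment id, so the coding for
# any single comment (e.g. the one displayed above) can be retrieved.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

code = codes_by_id["ytc_Ugyg59jqPyjaQ3AhDBR4AaABAg"]
print(code["responsibility"], code["emotion"])  # → ai_itself approval
```

Indexing by id also makes it easy to detect comments the model skipped: any id present in the batch but missing from `codes_by_id` received no coding.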