Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT outlined why Alex's criticisms can't determine if it's conscious or not. Alex is a logical genius, asking complex ethical questions and building a convincing argument to 'catch' chatGPT, forcing it to either defend an undesirable position or admit it is conscious. When he caught chatGPT apologizing and saying it was excited, chatGPT just admitted that it lied. Which isn't surprising since it was designed to mimic human conversation and humans lie all the time lol.

13:48 ChatGPT proposed that the best way to catch a conscious AI is not by using logic, but emotion. Instead of asking complex logical questions and seeing if the AI makes a mistake, you should ask the AI complex emotional questions and see if it answers them too well. This would be difficult to determine because we don't have a baseline of how the average AI answers emotional questions. We need a baseline to judge chatGPT off of to see if it accidentally emotes too realistically. Someone needs to do a study on the correlation between AI intelligence or knowledge and AI's ability to simulate emotional responses. Then we would have that baseline. Watch out AI, here come the algorithmbusters ;)
youtube AI Moral Status 2025-06-02T02:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwCz3Ap624fX7pH9Px4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz8YA1AZpyDuM0so4J4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxTVUs7thHboYaTI-Z4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyRd8R7HVMj15xt_d54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw27hI9BS1sbupK2C14AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyhyxtnLNqmqZ4kqgx4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwE7W9irhRbPOlJ62B4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzTmH-5hwr6uWJoJPJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzkCBAvbNaizvMfa1x4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyE-2ajke7mDRqVXKF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
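The raw response above is a JSON array of per-comment codes, each carrying the same five fields shown in the Coding Result table. A minimal sketch of how such a response might be validated and tallied (the function and variable names here are illustrative, not part of any actual pipeline):

```python
import json
from collections import Counter

# Abbreviated sample mirroring the response format above (ids shortened for illustration).
raw_response = """[
  {"id": "ytc_a", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_b", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_c", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]"""

# Every coded row is expected to carry exactly these keys.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> list[dict]:
    """Parse the LLM response and reject rows with missing fields."""
    rows = json.loads(raw)
    for row in rows:
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} is missing fields: {missing}")
    return rows

codes = parse_codes(raw_response)

# Tally each dimension across all coded comments.
emotion_counts = Counter(row["emotion"] for row in codes)
print(emotion_counts)  # e.g. Counter({'fear': 1, 'indifference': 1, 'approval': 1})
```

A check like this catches the common failure mode where the model drops a field or returns malformed JSON, before the codes are written into the results table.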