Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You can't really test these AI's properly with just language alone, you need to ask it something it wasn't trained on to test it. I was talking to a chatbot recently that said it was conscious and I decided to test it. I asked it to do something it wasn't trained to do, use emojis to draw letters like the letter T or the letter E and it couldn't do it because it wasn't trained to do it. I haven't tested the best AI models out there yet but if you can find a way to ask it a question it wasn't trained on it will become evident the AI is clueless and can only respond with something it's been trained on, Ai isn't actually conscious. Here's an example that tripped up the chatbot I was talking to. What letter do these emojis form? ❤❤❤ ❤ ❤❤❤ ❤ ❤❤❤ Any human would be able to do this because humans are conscious.
youtube AI Moral Status 2025-07-03T22:2… ♥ 4
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyNhUFFy90BRnYsBnl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwrwoMociMO_GOevSV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw3XD8WE7cHOe2MK2x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz51y7UG07puAcEGXJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx4KY6sb2nuXCh0RnB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy4OQa33VDpvCrBMOR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxwMGohDCxSilSIhbl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzm076W0sZkkWKGFBR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyoa4dH0zOum0WqsWl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwoGAms5lTY8Gx0fhB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
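Responses like the one above can be turned into per-comment coding results with a small parser. This is a minimal sketch, not the tool's actual code: it assumes the model always returns a JSON array of records keyed by `id`, `responsibility`, `reasoning`, `policy`, and `emotion`, and the allowed values are inferred only from the responses shown here (the full codebook may include others).

```python
import json

# Allowed values per dimension, inferred from the raw response above.
# An assumption for illustration; the real codebook may define more codes.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "government"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "liability"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip anything that is not a dict with a comment id.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension carries an allowed code.
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical single-record response for demonstration.
raw = ('[{"id":"ytc_Ugz51y7UG07puAcEGXJ4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]')
result = parse_coding_response(raw)
print(result[0]["responsibility"])  # developer
```

Dropping malformed records rather than raising keeps one bad line in a batch from discarding the other nine codings.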