Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"AI plays dumb when tested" They think on demand, and I can't help but feel when I use LLM models that something is missing from the equation for true intelligence. Maybe that's what the video talks about: generalization. Ultimately, the question seems to be: does AI lie because the concept of lying is in its training data, or because it is aware of it and is acting in order to achieve its own goal? If so, how can we distinguish which AI systems are doing which? How do we test awareness? To be more specific, because this is mentioned at the end: I'm not talking about mystical consciousness. I'm talking about whether an AI that says 1+1=3 is "lying" because the training data says so, or because it knows that it benefits from saying so.
youtube AI Moral Status 2026-03-01T09:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyrYPMJIx01PaBglQB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbV4gA7PUB-Sz1S3B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyNdVBMkLQD7MJtRKZ4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzfZF0xzTqFXXUvvnh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyVqJKq-wwhxQ7fDKt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzKEtJMT4pWkOnFZPZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzcGh7fEN9SDXkntQp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxyCnrmrdleX5QzfZh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwspGQFaQ-zc8-tbU94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyf45Zmp0ouG4LAEvl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
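For anyone working with these exports, the raw response above can be parsed and sanity-checked against the four coding dimensions before aggregation. Below is a minimal sketch; the required key set is inferred from the records appearing in this log (it is not an authoritative schema), and the `raw` string is truncated to three of the ten records for brevity.

```python
import json
from collections import Counter

# Raw LLM response as logged above (truncated to three entries for brevity).
raw = '''[
  {"id":"ytc_UgyrYPMJIx01PaBglQB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbV4gA7PUB-Sz1S3B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyVqJKq-wwhxQ7fDKt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

# Every coded record should carry an id plus the four dimensions
# shown in the Coding Result table (assumed key set, inferred from the log).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate(records):
    """Raise if any record is missing a required key; return records unchanged."""
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing keys: {missing}")
    return records

records = validate(json.loads(raw))
emotions = Counter(r["emotion"] for r in records)
print(len(records), dict(emotions))
```

A check like this catches the most common failure mode with raw model output: a record that parses as JSON but silently drops one of the coded dimensions.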