Raw LLM Responses
Inspect the exact model output for any coded comment.
Responses can be looked up by individual comment ID; the table below lists a random sample of coded comments (a minimal lookup sketch follows the table).
| Comment excerpt | Comment ID |
|---|---|
| Peoples must understand, that the so call A.I just recycling something that was … | ytc_UgxsLXnvq… |
| I just don't post my art online if at all cause I don't want it being stolen or … | ytc_Ugzsge1vB… |
| My son had an issue with his 5 year old vehicle and was still under warranty! Th… | ytc_Ugz_6BZ3u… |
| To all the AI bros out there, I wish you all a very merry YOU'RE GONNA GO BROKE … | ytc_UgwEVF68F… |
| Ai should be taken to court or Google for imitation of their art and words and l… | ytc_Ugznv5fjA… |
| *Be rich.* 💵💵💵 If you aren't, then you're a grain of sand. In other words, you'r… | ytc_Ugw-mK2Bc… |
| Governer Gavin Newsom and Pelosi all are invested in these AI companies. They su… | ytc_UgwMSnZZK… |
| There's not wanting to put in the work but then there's also being legit disable… | ytc_UgywTsCZP… |
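As a rough sketch of how such a lookup might work, assuming the raw responses are stored as a single JSON object keyed by comment ID (the file name and storage layout here are hypothetical, not the pipeline's actual format):

```python
import json

def load_raw_responses(path: str) -> dict[str, dict]:
    """Load raw LLM codings keyed by comment ID (hypothetical storage layout)."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def lookup(responses: dict[str, dict], comment_id: str) -> dict | None:
    """Return the raw coding for one comment, or None if it was never coded."""
    return responses.get(comment_id)

# Example: fetch the coding for the comment inspected below.
responses = load_raw_responses("raw_llm_responses.json")  # hypothetical file
coding = lookup(responses, "ytc_UgyVqJKq-wwhxQ7fDKt4AaABAg")
if coding is not None:
    print(coding["responsibility"], coding["reasoning"], coding["emotion"])
```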
Comment

> "AI plays dumb when tested" They think on demand and at least I can't help but feel when I use LLM models that something is missing from the equation for true intelligence. Maybe that's what the video talks about generalization. Ultimately, the question seems to be: Does AI lie because the concept of lying is in its training data, or because it is aware of it and is acting in order to achieve its own goal? If so, how can we distinguish which AI systems are doing which? How to test awareness? To be more specific, because this is mentioned at the end. I'm not talking about mystical consciousness. I'm talking about whether the AI says that 1+1=3 "lying" because the training data says so or because it knows that it benefits from saying so.

Platform: youtube · Video: AI Moral Status · Posted: 2026-03-01T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
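The coding result fits a small, fixed schema. A minimal sketch of that schema as Python enums, using only the values that actually appear in this section's output (the project's full codebook may define additional codes, so treat these value lists as incomplete):

```python
from dataclasses import dataclass
from enum import Enum

# Only values observed in this section; the real codebook may be larger.
class Responsibility(str, Enum):
    DEVELOPER = "developer"
    AI_ITSELF = "ai_itself"
    GOVERNMENT = "government"
    NONE = "none"

class Reasoning(str, Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    MIXED = "mixed"
    UNCLEAR = "unclear"

class Emotion(str, Enum):
    APPROVAL = "approval"
    FEAR = "fear"
    INDIFFERENCE = "indifference"
    MIXED = "mixed"
    OUTRAGE = "outrage"

@dataclass
class Coding:
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: str  # only "none" appears in this section's output
    emotion: Emotion
```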
Raw LLM Response
```json
[
  {"id":"ytc_UgyrYPMJIx01PaBglQB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbV4gA7PUB-Sz1S3B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyNdVBMkLQD7MJtRKZ4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzfZF0xzTqFXXUvvnh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyVqJKq-wwhxQ7fDKt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzKEtJMT4pWkOnFZPZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzcGh7fEN9SDXkntQp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxyCnrmrdleX5QzfZh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwspGQFaQ-zc8-tbU94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyf45Zmp0ouG4LAEvl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
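Since the model returns a whole batch of codings as one JSON array, a validation pass is useful before ingesting it. The sketch below checks only the keys visible in the response above (any stricter value checks would belong to the codebook) and separates well-formed entries from malformed ones:

```python
import json

# Keys present in every entry of the raw response shown above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> tuple[list[dict], list[str]]:
    """Parse one raw LLM batch response; return (valid codings, error messages)."""
    valid: list[dict] = []
    errors: list[str] = []
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError as e:
        return [], [f"response is not valid JSON: {e}"]
    if not isinstance(entries, list):
        return [], ["response is not a JSON array"]
    for i, entry in enumerate(entries):
        if not isinstance(entry, dict):
            errors.append(f"entry {i} is not an object")
            continue
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            errors.append(f"entry {i} missing keys: {sorted(missing)}")
            continue
        valid.append(entry)
    return valid, errors
```

Keeping malformed entries out of the dataset (rather than repairing them in place) makes it easy to re-prompt the model for just the failed comment IDs.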