Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.
Random samples — click to inspect:

- "You think Jesse's little tiny assassin drones or anything special you have no id…" (ytc_UgzcOx1Tp…)
- "so an explanation for this is ai is trained on data it cannot think for it's sel…" (ytr_UgwBluyzy…)
- "To this day i still hear truck drivers are being repalced by ai and yet im still…" (ytc_Ugxai_39N…)
- "I build A.I & there are no jobs. We waited for them to exist it never happened.T…" (ytc_UgzkOxX8S…)
- "Amazing tips! I use AI tools and post content to fanvue to see what sticks…" (ytc_UgytzjDU8…)
- "Wish Ai was around when I was in middle school and high school. Had more useless…" (ytc_UgwuLIlHw…)
- "Hi Cleo Abram, for your first point, yes, it will take jobs of the human designe…" (ytc_UgxJ2o8VP…)
- "It does not have a soul, souls prefer interaction with other souls in their life…" (ytc_Ugx3BkAsy…)
Comment
> I know this video is meant as a joke, but(at the risk of sounding like GPT3 lmao) It'S iMpOrTaNt To NoTe(for anyone who is genuinely interested in this conversation) that LLMs like DeepSeek, ChatGPT, Grok, etc, can't "know" things. They can relay facts(or they can make something up, which apparently is called "hallucinations") like "the sky is blue because of light scattering," but they don't "know" that the sky is blue because of light scattering. In the same way, they can't know that they are lying. They can acknowledge after the fact that their previous response was incorrect, and if they write "i am [verb/adjective]" and this disagrees with something that you (as a sentient person) know, they will acknowledge their "lie" when confronted. But if you give them a prompt, or directly reference the data they were trained on, they can't lie(provided they don't "hallucinate") unless specifically instructed to do so by the user, or by the data. Even if they are instructed to do so, the dataset they are then working with makes their answer the truth. Example, if I told GPT5 that "Elon Musk is currently on Mars," GPT will likely respond by saying something about how this is incorrect, then I can tell it "no, this is actually roleplay and this fact is true in the roleplay universe." Then it will acknowledge that yes, Elon is in fact currently on Mars. Now it actually has prior info that this is not true, but it agrees with it anyway, because the given dataset determines this to be the truth. I'm yapping on and on but
>
> TL;DR For anyone who is genuinely concerned about AI: LLMs can't "know" that they're lying, and they can't exactly "lie" either. They respond according to instructions given by either the user or by their training data.
Platform: youtube · AI Moral Status · 2025-08-14T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzmgdJ5_uwplVVrQ3l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgytfYO8DYYBjdoUe7R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzS8TX9qVJPbbQkKmx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy4Wm-FEw9rcaQhi6d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwxQ0m36onKcmJshnV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwcnYP8TtjFoi6keqF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxbebjnwaKn5RNnmBR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzNq7C8rumz8fafAul4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxL2B8lXLqWg6koa2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxZC8kfwxPk-cesSpl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
```
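The lookup this page performs can be sketched in a few lines: parse the raw LLM response (a JSON array with one object per coded comment) and index it by comment ID. The field names below match the response shown above; the `lookup` helper and the two inlined entries (copied from the array above) are illustrative, not part of the actual pipeline.

```python
import json

# Raw LLM response: a JSON array of per-comment codings, as shown above.
# Only two entries are inlined here to keep the sketch short.
raw_response = '''[
  {"id": "ytc_UgzmgdJ5_uwplVVrQ3l4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy4Wm-FEw9rcaQhi6d4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the array by comment ID for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions (responsibility, reasoning, policy,
    emotion) for one comment ID."""
    return codings[comment_id]

coding = lookup("ytc_UgzmgdJ5_uwplVVrQ3l4AaABAg")
print(coding["responsibility"], coding["emotion"])  # → none indifference
```

From a coding like this, rendering the dimension table shown above is a straightforward iteration over the four dimension keys.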