Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or inspect one of these random samples:

- "So here's a thought experiment for you. Some guy uses Ai art to composite manua…" (ytc_UgyPqpqOw…)
- "Poisoning doesn't really work against training loras. All poisoning does is to c…" (ytc_UgwKkbPJi…)
- "As a programmer who works with AI on AI, I will say I don't care what people say…" (ytc_UgwC9pWTH…)
- "So we need to figure out how to kill an AI. How do we murder the AIs?…" (ytc_UgymwQp3b…)
- "sir so there must be an act or rule on this topic that no AI or institution can …" (ytc_UgwP1tgPF…)
- "25:46 For the same reason that people still pay for Netflix subscription even th…" (ytc_UgwypLWw6…)
- "Yeah I saw something like that too, it's a really good analogy! A similar one wo…" (ytr_UgxZQKbbc…)
- "I dont like the thought if AI, however, I have lost a lot of confidence in many …" (ytc_UgwK5h_rr…)
Comment
So I’m only at the beginning of the video and I probably won’t end up watching the rest of it. My attention span is just not that long. But I figured I would say that a lot of people have a misconception of AI and it’s really hard to explain it all in a YouTube comment but they’re different types of AI’s and LLMs are the ones most people talk about and arguably. These are less advanced than other AI but they’re the most common when people talk about AI and these models are usually designed to always give you an answer and always give you the answer It thinks you want. Basically when you program an AI and you never let it give you and I don’t know or any kind of unsure response. It will start to make stuff up because it knows that it can’t say I don’t know this is a common issue with AI’s like ChatGPT. And then they will also try to give you the answer. It thinks you want even if it’s not actually the truth so a lot of people have been doing these tests where they basically seeing if the AI will take over the world or will harm humans and there’s so many factors that can determine its decisions on its answers but a lot of the time it’s AI giving you the answer it thinks you want or people in general would want depending on the AI model as not all learn you specifically if that makes sense again this is a really cool topic. I love talking to people about it and it goes so much deeper it’s just already along YouTube comment and there’s so much more that goes into it. I’m also just using speech to text so hopefully everything came across the way I meant it to.
youtube
AI Moral Status
2026-03-15T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyZnX5b2n2Tx4ExFM94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxyeCPmdzejlWonDQ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxbojNuzNsXsOhlKt14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxGRDKzOGENpB5Ffoh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzS-DLlKsLp9cAjCDR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw4zI-iu6f6Ud5rp2l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz_9l9I9bg5eDMHvD94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyOa3CcAd3Oqsx65zN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzBJwJZdKMf-uFgme54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzlqGTYzKIBEE8kKF14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
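The raw response above is a JSON array of per-comment coding records, each carrying a comment `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such an array can be indexed for lookup by comment ID — the field names come from the records above; the variable names and the two sample records used here are illustrative, not part of the pipeline:

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# (Two records copied from the response above, for illustration.)
raw = '''[
  {"id": "ytc_UgyZnX5b2n2Tx4ExFM94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzBJwJZdKMf-uFgme54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"}
]'''

# Index the records by comment ID so any coded comment can be
# looked up directly, as the "look up by comment ID" view does.
codings = {record["id"]: record for record in json.loads(raw)}

coding = codings["ytc_UgzBJwJZdKMf-uFgme54AaABAg"]
print(coding["responsibility"])  # -> developer
print(coding["emotion"])         # -> mixed
```

A real implementation would also need to handle malformed model output (truncated arrays, missing keys), since the response is generated text rather than guaranteed-valid JSON.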