Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytr_UgywjazSM…: "We appreciate your question. The physical appearance of robots like Sophia is de…"
- ytc_UgxK9sQIF…: "It is like a newborn baby and I hate to say it but the more you teach it the mor…"
- ytc_UgzfObG8j…: "Its just a tool to complete certain task efficiently. The fact people can tell i…"
- ytr_UgzVxYcSs…: "said blueberrydragon13, then closed the youtube tab so that he can continue chat…"
- ytc_UgzDyFZS2…: "What can i say. Humanity never learns. And the funny part is that there were mov…"
- ytr_Ugy6tEiDl…: "What!? He didn't predict this like decades ago. Most people following the AI pro…"
- ytc_UgyOW6RuF…: "I think the comparison is still wrong, because you're comparing a medium that of…"
- ytc_UgyLJaQiX…: "Vance doesn't know what he's talking about. He doesn't understand the risks tha…"
Comment
> After listening to 9 minutes:
> Your problem is to not differentiate betweeen closing your hands" and "making the clap noise". Of course for the noise that emerges, you need also the component of speed. So you somehow set wrong premises by overlooking crutial aspects. And the argumental chain that follows is not only your fault. It's the fault of all philosophers we actually know, because they're relying too hard on language.
> GPT is never wrong here-it's you and your false premises, and also how you set them. And (sorry therefore) being rude to the AI and truth itself.
> To me it looks like the AI is "going down with you" cause it's too nice.
> I also noticed that you play with this alignement of Chat GPT for being empathic, helpful and nice, which is interferring with the facts and truth it tries to communicate.
> It love your channel and the intention behind it, but the real devil (or daose thing) is you.
Platform: youtube · Posted: 2025-05-26T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyQMtDEWw7JPDfy2Mx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwPbWdk_Y8URRJVo6V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz44as2fSeLoTWSCTN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw73stmvm8JluPftAF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxjkRUNs9UlorWOgUx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwzBq4uruRfx5lfoFh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxP8qwpclajcL-EObt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzbDRzZRK_amwvu7fd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyPiSKS8fIKLyuALqp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyTeEwFz1IyuV718qJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
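The raw response above is a JSON array with one object per coded comment, each carrying the four dimensions shown in the result table. A minimal sketch of how such a response might be parsed and validated follows; the allowed code values are taken only from this sample, so the actual codebook is assumed to be at least this set and may be larger.

```python
import json

# Code values observed in the raw LLM response above, per dimension.
# Assumption: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"distributed", "ai_itself", "none", "user",
                       "developer", "company", "unclear"},
    "reasoning": {"mixed", "unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "industry_self"},
    "emotion": {"mixed", "indifference", "approval", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    mapping from comment ID to its dimension codes, dropping any entry
    whose codes fall outside the allowed values."""
    out = {}
    for entry in json.loads(raw):
        codes = {dim: entry.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            out[entry["id"]] = codes
    return out

# Usage with the first entry of the response above:
raw = ('[{"id":"ytc_UgyQMtDEWw7JPDfy2Mx4AaABAg",'
       '"responsibility":"distributed","reasoning":"mixed",'
       '"policy":"unclear","emotion":"mixed"}]')
codes = parse_codings(raw)
print(codes["ytc_UgyQMtDEWw7JPDfy2Mx4AaABAg"]["responsibility"])  # distributed
```

Validating against a fixed value set catches the most common failure mode of LLM-based coding, where the model emits a value outside the codebook; dropped entries can then be re-queued rather than silently miscounted.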