Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I remember when AI was, 'Haha, look! We trained this spider ragdoll to walk! We…" (ytc_UgyiYL6JV…)
- "Y'all need to stop hating on AI art is real and if you think you're better than …" (ytc_Ugz2rGeQT…)
- "With or without jobs or copyright issues, my problem with AI is the prospect of …" (ytr_Ugy87CcSH…)
- "I like how some people dedicate their lives to art just for some people to just …" (ytc_UgzbAgbvy…)
- "BP should do a segment on the next steps beyond LLMs and agentic AI... world mod…" (ytc_UgzZmZPLv…)
- "AI does not have a human heart or a biological or psychological or physiological…" (ytc_Ugwa7I1Y6…)
- "AI IS STRAIGHT BULLSH*T / ITS FULL OF LIES AND FAKERY / NO TRUTH IN IT / AI IS PLAY…" (ytc_UgyAGk7rk…)
- "The corrupt politicians are the ones who have the most to fear as they get knock…" (ytc_Ugx2T_DFq…)
Comment
As a therapist, this is the part that worries me. Can you get the validation you’re not getting from humans by talking to an LLM? Yes. _But that’s exactly the problem._ The LLM will humor you indefinitely; a human will not. So yes, it might make you feel better to have a person who will validate you endlessly - but if you come to rely on that, you’ll start to find true human contact too unpredictable and unaccommodating to justify spending time on it. And that will just cause you to isolate yourself further. It’s a feedback loop. Please be careful not to fall in.
Platform: youtube · Video: AI Moral Status · Posted: 2025-07-15T17:1… · ♥ 418
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugzu0xLjIwnRdv-gJoZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxKpOP3i5qoou_iq1d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyqQI9hn--szkkbhDd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx8pZsWdJqqmMzlG7B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw44IP8eBPgks1i-zB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzC_8LoQZEYxTclOuB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyl-gKtgeCvUw3gW5x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgylATgqt2DujBeeosV4AaABAg","responsibility":"society","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx5Aoyr8V6Xg_WNquV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxJfWszZ6dJIhSnENt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
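The raw response above is a JSON array in which each object carries the comment `id` plus the four coded dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and looked up by comment ID follows; the function name and the dictionary-index structure are illustrative assumptions, not part of the tool itself, and the sample is abbreviated to two records:

```python
import json

# Abbreviated batch response in the same shape as the raw output above
# (two records kept for the sketch).
raw_response = """
[
  {"id": "ytc_Ugzu0xLjIwnRdv-gJoZ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw44IP8eBPgks1i-zB4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse a batch response and index the coded dimensions by comment ID.

    Missing dimensions fall back to "unclear" (an assumption for this
    sketch; the real pipeline may handle absent keys differently).
    """
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codes = index_by_id(raw_response)
print(codes["ytc_Ugw44IP8eBPgks1i-zB4AaABAg"]["reasoning"])  # deontological
```

Indexing by `id` makes the per-comment lookup shown on this page (comment ID to coded dimensions) a single dictionary access.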