Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_UgwT_CyUA…: "wtf is wrong with this guy? he says....yeah it can kill us alll. but invests int…"
- rdc_nsym4wr: ">The NHTSA website also includes a letter from the Austin Independent School …"
- ytc_UgzSThfUS…: "My experience with dealing with AI call center was I canceled an order, and it g…"
- rdc_mrranuw: "I want to know who will have the money to buy what the AI will produce…"
- ytc_Ugw-8i9ps…: "Fake news. The actual level of A.I. operating right now within the 'elite' NWO t…"
- ytc_UgxJpVZxw…: "When you consider what AI is capable of, just sit and honestly consider how you'…"
- ytc_Ugwuq4jbS…: "B.S. ELON MUSK is not being honest with us... He pushed Grok to be the fastest …"
- ytc_Ugwy3NIsN…: "Here’s what makes AI “dangerous”. It’s like a human who doesn’t have to lie. Who…"
Comment
Every time I hear experts talk about brain strain and AI overload, I keep coming back to one simple thing:
The real danger isn’t AI itself — it’s the fact that most people don’t have a system for thinking clearly anymore.
That’s why these conversations feel so heavy. People are trying to fight an accelerating world with a 1990s mental toolkit. No wonder everyone’s overwhelmed.
I’ve been working on something called a “clarity engine” — not to replace thinking, but to make thinking more stable. It trains your mind, not your emotions. It actually reduces cognitive load instead of adding to it.
If AI is going to be part of our future, clarity has to be part of the foundation. Otherwise we’re just reacting to noise instead of building direction.
This episode nails the problem. Now we just need a solution people can actually use.
Platform: youtube
Timestamp: 2025-12-02T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzv4eCgCapzMqcR2Dt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwxH9d-aPPr4LAuyG54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyE-Pl-GFRJ6Tvz6y14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxQIx5RLe0OUAVhViN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwa5wjf8kut5XBjnR54AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwHlkXtZRVIByj1Sl54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzbOnOZZn3sSn0E10V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzRkXCJxII9i1zdjXR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz-Q4xfQGVvAsMYZLx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgziujtjgtRg_UGyjjt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
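The raw response is a JSON array with one record per comment, covering the four dimensions shown in the coding-result table. A minimal sketch of how such a response could be parsed and sanity-checked is below; note the allowed label sets are inferred only from the values visible on this page, so the real codebook may include labels not listed here.

```python
import json

# Allowed values per dimension, inferred from the codes visible on this page.
# (Assumption: the actual codebook may define additional labels.)
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "fear", "outrage", "mixed", "indifference"},
}

def validate_codes(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of problems found."""
    problems = []
    for rec in json.loads(raw):
        rid = rec.get("id", "<missing id>")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{rid}: bad {dim}={value!r}")
    return problems

raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}]'
print(validate_codes(raw))  # → [] (all dimensions valid)
```

A check like this catches the common failure mode of LLM coders drifting outside the label set (e.g. inventing `"emotion":"anger"`), so bad records can be flagged for re-coding rather than silently stored.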