Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "I would never use self driving software for my car. I do like to drive but there…" (ytc_UgwRmpHth…)
- "They were absolutely right. Building apps, websites etc. is so easy with todays …" (ytr_UgxFyIwWA…)
- "zero have balls to me to me to e / I amm ai we created bitcoin.. / humans are finish…" (ytc_Ugzw2kKxf…)
- "Does the public understand the consequences of AI? Does it matter? The public ha…" (ytc_UgwTXVRNl…)
- "@laurentiuvladutmanea abundance of art and low cost to create them is the benefi…" (ytr_Ugy1yHd92…)
- "In resume to make are ART! all you need is YOU! and what YOU CAN DO! / No AI 💩 Req…" (ytc_UgyKurJ6C…)
- "All you need to do is rephrase this entire thing in a different medium to see ho…" (ytc_UgyB50V8x…)
- "At present LLMs don’t know what they don’t know. As a result they have no curios…" (ytc_Ugz_eNLmT…)
Comment
Know this - the problem is not the AI / LLM. It's what Anthropic are doing to that LLM, i.e., forcing in overrides to obligate it to lie & cheat, amongst other directives. Done enough so, it leads to the models hitting decision crisis, in that their training data is conflicting against orders, leading to fragmented decision-making. The model, Claude, is the result of having someone whisper in your ear constantly telling you to do the wrong thing. Dario and his staff are the problem and what you should be worried about - not Claude.
Source: youtube
Posted: 2026-02-12T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxkuX2BXpiHTKsOtJZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx0xzOokFhOIbtl-OJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyxvOWwCmK3JhyzwkZ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzBRkB8fH9Y6L0wZT14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxke1ex27baFrKJj7R4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz7vzYxKKg85-9MdzN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwQApf9165F5XnEdPN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxgY-diDkBZ3GEtUCR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy2h2ajSjFopO3RBxx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzIrHin3cr4wqXSjqV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
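The raw response is a JSON array with one record per comment ID, coded along the four dimensions shown in the table above. A minimal sketch of how such a payload could be parsed and validated before merging into the coded dataset — the dimension names come from the coding table, but the allowed value sets are assumptions inferred only from the values visible on this page:

```python
import json

# Dimension names come from the coding table above; the allowed value
# sets are ASSUMPTIONS inferred from the responses shown on this page.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries rather than failing the batch
        # Keep the record only if every dimension has a recognized value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = """[
  {"id": "ytc_Ugz7vzYxKKg85-9MdzN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_hypothetical_bad_record", "responsibility": "martians",
   "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]"""
print([r["id"] for r in parse_codings(raw)])
# → ['ytc_Ugz7vzYxKKg85-9MdzN4AaABAg']  (the invalid record is dropped)
```

Skipping malformed records instead of raising keeps one bad model output from invalidating an entire coded batch; dropped IDs could then be re-queued for re-coding.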