Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugy5RQExd…: "I don't think I'd like anyone (AI or not) continually complimenting me and makin…"
- ytc_Ugyo6sDPf…: "U can also have a massive iQ and have no common sense can ai bots have common se…"
- ytr_Ugyz8bIyN…: "@laurentiuvladutmanea "But these programs are not somebody, and are not capable …"
- ytc_Ugz8IJw6a…: "When we turn off tools such as code execution or calculator, all AI (LLM) models…"
- ytc_UgxwswXZK…: "Their right in one thing. AI isn't going to disappear, but real artist aren't ei…"
- ytr_UgxhfmPmp…: "And that's why he is in 1 Hour 38 Min of red flag by the Ai system. It accuratly…"
- ytc_Ugx60U4qL…: "I've counseled (in a religious role) families where a suicide occurred. Humans o…"
- ytc_UgyPpw_MO…: "Why don't we worry about Letting science putting nuclear reactors on the moon in…"
Comment
> Agi will never happen, ai models are primitive a form of filtering nothing like how a human comes to a thought. Ai models don't think in abstract terms, a human's thoughts, ideas and beliefs can change and are adaptable based on new information. Ai models use the most common information and don't know right from wrong.
>
> Ai is the answer to a question nobody asked, it's over hyped, pointless and will cause environmental damage beyond anything we can imagine. It will also be used to subjugate.

Source: youtube
Timestamp: 2025-11-29T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzeRliUtoAEyiFGveF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzLGQc0R9ZdApjB2Gx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwnUCUurzu5biN2mM94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz36WPoaGsmALDHbv94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyIKNvll0bTZxZ0iLp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy-zaJ8EmoSYC-1qMd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyb07L8aU1TKUvr7Et4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwgGrs5yRK1bVZjw594AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxn5XhyLAXP2Bt23Cp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0ivoq2KlBKjKxesJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
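The raw response above is a JSON array of per-comment codes keyed by comment ID, one object per comment with the five coding dimensions. A minimal sketch of how such output might be parsed and indexed for the look-up view, assuming only the field names visible in the response shown (the `parse_codes` helper and its validation are illustrative, not the tool's actual code; the two sample records are copied verbatim from the response above):

```python
import json

# Two records copied from the raw response shown above, for illustration.
RAW = '''[
{"id":"ytc_UgzeRliUtoAEyiFGveF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz36WPoaGsmALDHbv94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

# Field names observed in the response; any other schema detail is an assumption.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw):
    """Parse one raw model response and index records by comment ID,
    rejecting records that are missing any expected field."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
        coded[rec["id"]] = rec
    return coded

codes = parse_codes(RAW)
print(codes["ytc_Ugz36WPoaGsmALDHbv94AaABAg"]["policy"])  # ban
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each dimension of a coded comment is then a single dictionary access.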