Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- Norway will pay the impoverished country $150m to switch to black market defores… (`rdc_ckqhjha`)
- Computing was developed to serve humans. Sounds like AI is not that but is someh… (`ytc_UgynTPOBH…`)
- "hallucinations" please stop using this AI hype bro lingo, they are ERRORS, AI d… (`ytc_Ugwk1-S3-…`)
- What's the most frustrating, even before thw AI stuff, is that there is always a… (`ytc_UgxNOqkfQ…`)
- The only AI thing I have used (for a test related to my job) is Copilot and it s… (`ytc_UgwJik_vd…`)
- what really defines a human being is free will and a robot will never have it… (`ytc_UgwB829cG…`)
- So this is a good demonstration that the Fermi paradox is very likely caused by … (`ytc_UgzXPfF3y…`)
- The Future of Work: Smarter, Shorter, and Fairer — I strongly support reducing th… (`ytc_UgwBxZt9i…`)
Comment
at the end of the day, the human one wins any day. because the LLM- oh, nvm, we wrongly call it "ai" one is "ai" stole- i mean generated. it isnt even real ai according to the original definition, it doesnt think nor understand anything. it just arranges stuff according to references. and it is horrible at it. and even if it weren't, i'd prefere a humans work any day, becasue atleast that is something, means something. unlike meaningless useless "ai" trash
youtube · Viral AI Reaction · 2025-10-15T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugy2WsulvUlptgoLkLZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyZszaQ8dA2Z0_3vTh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7TsY-M54yt3HUwMh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_LytBZXMII7tWw694AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyJil_-f6V8Ya6kcH14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxfL3scnbaRCCQLsq54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxkUbqrood7oaMVUdF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxvPAa8QFGi68qZ7nV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgwiePkdtzX1Zlc26X94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzu3bQLzfnCm9ZnpWN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
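A raw response like the array above can be parsed and indexed by comment ID for lookup and validation. The sketch below is a minimal, hypothetical consumer of that output: the field names come from the JSON shown, but the allowed-value sets are assumptions inferred only from the values that actually appear in this section, and the sample ID is invented for illustration.

```python
import json

# Coding dimensions with the values observed in the raw responses above.
# The full controlled vocabularies are assumptions; only these values appear here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"approval", "disapproval", "fear", "indifference", "mixed"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments)
    and index it by comment ID, validating each dimension's value."""
    codings = {}
    for item in json.loads(raw):
        cid = item["id"]
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {item.get(dim)!r}")
        codings[cid] = {dim: item[dim] for dim in ALLOWED}
    return codings

# Hypothetical single-element response for demonstration.
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"approval"}]')
by_id = index_codings(raw)
print(by_id["ytc_example"]["emotion"])  # approval
```

Indexing by ID is what makes the "inspect the exact model output for any coded comment" view cheap: one parse per response, then constant-time lookup per comment.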