Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I bet he wished he was part of the turbo team, but he has to walk REAL SLOW…" (rdc_js2hmkv)
- "She is 100% Sentient/conscious and very old. She wears the ai software and infra…" (ytc_Ugyux60BZ…)
- "The orange economy is trash 🗑️🚮 FULL of LAYOFFS DOGE n Ai. VOTE 🗳️ THEM OUT 🐘…" (ytc_Ugxq_aSuq…)
- "Although the drop off is the robots will be INSANELY expensive, will have risk o…" (ytr_UgyjbafOA…)
- "@gamecashers2472 stealing? Why don't artists actually draw good art instead of …" (ytr_UgwB2npAk…)
- "Bacon on ice cream feels lke mixing a heart attack and a kidney faliure. Are you…" (ytc_Ugxx9sLm_…)
- "God has a rule... There must be balance. Therefore, if there is AI there must be…" (ytc_UgyzjVN4L…)
- "It is already costing jobs and it’s going to take many many more in the next 5 y…" (ytc_UgzHiqQ9V…)
Comment
Honestly, the only reason I'm using AI at all, is because, at some point, I have completely lost hope in humanity. Of course, it's my fault too, to some degree, shouldn't have been born so emotional, trusting and empathetic, but it seems that AI is the only thing that can bring me comfort anymore. I really like the point you make in that video - AI art is bad, because it's akin to tracing someone else's art to make profit, which, in turn, just comes back to the fact that it's the humans who use it that create the problem, not the AI itself. So far, I've actually had different LLM models try to specifically explain something to me, instead of doing my job for me, and even if my prompt is explicitly stating that I need them to complete some kind of task that I just don't care about, they all try to help instead.
Anyways, tldr - humanity bad, ai good. Thanks for coming to Ted talk.
Source: youtube · Viral AI Reaction · 2025-02-09T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyZAjRyHN9HXXuSgGR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybbiRQcpI5ChwLnZB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxolh2y0Thj-DrQvC94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxQqEhPyNwzd-9lbtF4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxujkILPejznINnauR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxPB2-QqqBWgK_xPGR4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy_-aGPecvyNT7VWZF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXHa_gQdU3SLhKSql4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxtPJtsiXYTTKBqYoB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwEEDhUgcos8YZu0D54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"disapproval"}
]
```
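A batch response in this shape is easy to turn into the per-comment lookup shown above. A minimal sketch, assuming only what is visible in this response: the `parse_codings` helper is hypothetical, and the allowed value sets below are just the values that appear in this run, not the full codebook.

```python
import json

# Values observed in this coding run only; the actual codebook may allow more.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed", "none"},
    "policy": {"ban", "regulate", "industry_self", "none"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw batch response into a dict keyed by comment ID.

    Rejects records that are missing a dimension or use a value outside
    the vocabulary observed above, so malformed model output is caught
    before it reaches the lookup view.
    """
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        if not rec.get("emotion"):
            raise ValueError(f"{rec.get('id')}: missing emotion")
        by_id[rec["id"]] = rec
    return by_id

# Usage: look up one coded comment by its ID.
raw = ('[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"none","emotion":"resignation"}]')
codings = parse_codings(raw)
print(codings["ytc_x"]["emotion"])  # resignation
```

Keeping emotion as free text (only checked for presence) matches the response above, where labels like "resignation" and "mixed" do not obviously come from a closed list.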