Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Not really, tons of actual computer scientists explaining what we have now isn't…" (ytr_UgwskMypm…)
- "We went from movies about AI taking over and destroying humanity to it ACTUALLY …" (ytc_UgwZykxTJ…)
- "AI in the wrong hands is very danverous, and at the moment we have wrong hands a…" (ytc_UgxsUH34p…)
- "Autopilot isn’t “self-driving”. Regulators have linked it to dozens of fatal cra…" (ytc_Ugyy77num…)
- "Gen AI is still a money loser with variable results. Call centers, sentiment ana…" (ytc_UgzNK0QuG…)
- "I’m incredibly curious as to how people got surprised by this. Like… really? I s…" (ytc_UgyowTjDs…)
- "AI needs a lot of computing power, therefore a lot of electricity. Once people g…" (ytc_UgzKAdb77…)
- "It does suck. It shouldnt happen. There are bigger fish to be concerned about. T…" (ytr_Ugw4sKo8n…)
Comment
> The idea that AI can’t think is outdated. They _have_ received updates, ya know 😅
> When you ask GPT something and it says _”Thinking…”_ it’s actually doing that and it’s more or less the same process as us; it runs through a few reasoning options, weighs them against each other and disregards the weakest ones, then replies with what it _thinks_ is the most accurate/helpful response.
> It literally _thinks_ better that like, 90% of humans lol
youtube · Viral AI Reaction · 2025-09-13T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz203bxRm3aF-3hgh14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-EXZfSKtcdDKRjil4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzprfwGOuOTIeaUtGB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzJ_mpnJA9BTk3fLE54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwXJ7mSh3qssaRcqdx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzxH95rxDxpCeoP61l4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyZfVgWZ6jPn2lFTNB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwpuf5Zf3qrAfs6Qol4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwdMt7z6Yb173EnDVx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzY4pK6DZxwnT6YHDN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
```
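Responses like the one above can be parsed and sanity-checked before loading into a dataset. The sketch below is a minimal, hypothetical validator: the allowed value sets are assumptions inferred from the sample records shown here, not a documented schema, and the function name `parse_codings` is illustrative.

```python
import json

# Assumed allowed values per dimension, inferred from the sample output above.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "outrage", "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop anything that isn't a dict with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension holds an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"outrage"},'
       '{"id":"ytc_bad","responsibility":"robot"}]')
print([r["id"] for r in parse_codings(raw)])  # ['ytc_example']
```

Rejecting malformed records rather than raising keeps one bad row in a batch response from discarding the whole batch.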