Raw LLM Responses
Inspect the exact model output for any coded comment. Look a comment up directly by its ID, or pick one of the random samples below.
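As a minimal sketch of what a programmatic lookup might look like: this assumes the coded records are stored as a JSON array in a file named `coded_comments.json` (both the filename and the storage format are assumptions), with each record shaped like the raw LLM response shown further down.

```python
import json

# Hypothetical path; assumes a JSON array of records shaped like the
# raw LLM response below: id, responsibility, reasoning, policy, emotion.
CODED_PATH = "coded_comments.json"

def lookup(comment_id: str) -> dict | None:
    """Return the coded record for a comment ID, or None if absent."""
    with open(CODED_PATH, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r["id"] == comment_id), None)

print(lookup("ytc_UgxFg0ayWMOnA-17nEJ4AaABAg"))
```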
Random samples

- "A year old and ahead of it's time AI art is even worse, so lazy and has no integ…" (`ytc_UgzP33kOc…`)
- "AI is somewhat like social media. It should be used for good but it can create …" (`ytc_UgytboAE3…`)
- "Digital world should become detached from physical. Humans went quite too far wi…" (`rdc_ky7tg8n`)
- "2030 the only jobs left for humans require compassion, anything else can be done…" (`ytc_Ugx3DouJs…`)
- "Now BJP Politicians would get caught red handed committing crimes and they will …" (`ytc_UgyujMS0p…`)
- "I've been thinking about this problem for a while. How to code morality. I think…" (`ytc_Ugy3t5OhQ…`)
- ""AI Art" is neither AI nor art, but it does have some limited utility for a crea…" (`ytc_UgxRTQg5R…`)
- "This point always confuses me. If we pulled all our money out of oil, would tha…" (`rdc_deglr16`)
Comment
I'm old enough to remember bubbles back to the beginning of Personal Computers (remember those?) None of them lived up to the hype, which grows more extreme with every round. Also I worked in high tech (data networking), I know how this looks from the inside. It looks like Dilbert. I am skeptical, I think, with good reason. Not that I don't think there is danger, I think the danger is unlikely to arise from AI superintelligence. Possibly from something _called_ AI superintelligence, but is not in fact superintelligence, and certainly not super wisdom. The danger, I think, will come from a collapse of civilization due to people not doing the work necessary to sustain it, a process already far advanced, somewhat obviously, in my opinion. I don't mean to assure that they won't take your job. I mean, in many cases, they will take it, to their own detriment. BTW, that process does not require AI.
| Field | Value |
|---|---|
| Source | youtube |
| Category | AI Governance |
| Posted | 2025-09-04T14:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
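Each dimension takes a value from a closed label set. A small sketch of how the coded values could be validated, with the allowed labels inferred only from the values visible on this page (the full codebook may define more):

```python
# Label sets inferred from the values visible on this page; the real
# codebook may include additional labels.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "mixed"},
}

def invalid_fields(record: dict) -> list[str]:
    """Names of dimensions whose coded value falls outside the label set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]
```

Running `invalid_fields` over every record is a cheap check for a model that drifts outside the codebook.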
Raw LLM Response
[
{"id":"ytc_UgyNe3UrDrteW-2AYtJ4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwDN7IDgFyENQHqxPB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxFg0ayWMOnA-17nEJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzDZntQdFZRwO19utV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyoUxhhLFG_ZQGS__F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7KY8yZQnYG8OO5ud4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxO_ioP-hdGPG04M5l4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw07aCsmtQI_j9i5654AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxjmQlbTrQpwq7JAxJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyj9MlbwtgOk7DBxrl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"mixed"}
]
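A sketch of how a response like the one above might be parsed, assuming the model was prompted to return a bare JSON array; it also tolerates the common failure mode of the array arriving wrapped in a markdown code fence or surrounded by extra prose (this parsing step is an illustration, not necessarily how the pipeline does it):

```python
import json
import re

def parse_raw_response(raw: str) -> list[dict]:
    """Extract the coded records from the model's raw output."""
    # Grab everything from the first '[' to the last ']', so fences or
    # stray prose around the array do not break parsing.
    match = re.search(r"\[.*\]", raw, flags=re.DOTALL)
    if match is None:
        raise ValueError("no JSON array found in model output")
    return json.loads(match.group(0))
```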