Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or inspect one of the random samples below.
- Robots and AI are about to take of and advice so quickly now t believe in the n… (`ytc_UgzRXls4G…`)
- For about 90-95% of people in the country, education isn't a good idea. After al… (`ytr_UgwC7D4gr…`)
- I’m calling B.S. right now…everyone who is close to, or in the AI industry, incl… (`ytc_UgzB3tysm…`)
- Wow! Took one for the pedestrians aye!? Thanks robot, I hope you fixed it, park … (`ytc_Ugwbogzzf…`)
- It's Tesla cult culture in action: "I was taking a nap while I had the broken be… (`ytr_UgxSKK9_G…`)
- I think ai should be banned for art since im a fellow artist too and i kept hea… (`ytc_UgzrksYXb…`)
- Ai could totally do this. It's only a matter of time until they realize the true… (`ytr_UgwyoDVdS…`)
- Maybe the AI will stop making the boring movies that nobody watches that win the… (`ytc_UgzUP9oNo…`)
Comment
The AI2027 guys are reifying "intelligence" into a scalar property that can recursively self-improve without bounds. This is an ontological error. Good news, obviously. My research shows that agency is distributed and emergent, not a property of individual entities. And these systems are entangled with infrastructures that impose hard limits. The doomer narrative needs the AI to be an agent with goals. But agency in these systems is relational and emergent, not intrinsic. That's not reassuring necessarily since distributed agency can still be dangerous, but it means the recursive self-improvement story doesn't work the way they think it does. Reboot.
youtube · AI Jobs · 2025-11-18T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwsWu3-vLuDxd6pZHJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugzkkxk_d1hug7wGjPJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw4uQv-y33zI6cC99J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy9bgwRVdbc63peWth4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwEREuk7Lzbk6QzugJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzmfyNfvQ3jaY90XDV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzPH8Y9twa6i0eG8QN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwRTj12gjyDxPDFeVp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgxdQmaVeEOKWPBQBDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgzRsXb63Cf7-kwQ3Mt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
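The raw response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such an output could be parsed and indexed by comment ID for lookup (the `index_codes` helper and the `"unclear"` fallback are assumptions for illustration, not part of the tool itself):

```python
import json

# The four coding dimensions present in each record of the raw response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response (a JSON array of records) into a dict
    keyed by comment ID, keeping only the expected dimensions."""
    index = {}
    for rec in json.loads(raw):
        # Fall back to "unclear" if the model omitted a dimension
        # (an assumed convention, mirroring the "unclear" value it
        # already emits for ambiguous comments).
        index[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return index

raw = """[
  {"id": "ytc_UgwsWu3-vLuDxd6pZHJ4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "none", "emotion": "disapproval"}
]"""
codes = index_codes(raw)
print(codes["ytc_UgwsWu3-vLuDxd6pZHJ4AaABAg"]["emotion"])  # disapproval
```

Indexing by ID is what makes the "look up by comment ID" view cheap: one parse of the raw response, then constant-time lookups per comment.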