Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
14:40 OBVIOUSLY brains don’t all function at the same level of proficiency, thei…
ytc_UgytYL1Ys…
The time in question:
Right before board members decide to replace the CEO…
rdc_m28biqr
The guess is wrong on one thing that most people won't realize until AI takes ov…
ytc_UgwKxWU4h…
if the robot's dont have an feeling it should be like hal 9000 or glados…
ytc_UgwOauXWT…
I knew about some of the problems involved in "AI" but this is a complete pictur…
ytc_Ugwexuei9…
Plot twist: ai said I will die too if you just like a real girl 😂😂😂 lol…
ytc_UgzhuDqzU…
I started off horrible at art, but I’m getting pretty good at it. If you practic…
ytc_UgwJvrZ_u…
The ethical considerations around AI development are paramount, and I'm glad to …
ytc_Ugwd9Ol95…
Comment
The video totally overlooks the fact that AI is hitting a massive development wall. It’s not just about "safety" or "scary tech" it's about the economic bubble and scaling stagnation. The tactic of just throwing more powerful hardware to the LLM's has totally stopped scaling. And with an internet being filled with AI, we’re running out of high-quality data, and the cost to eke out even tiny improvements of these models is becoming exponentially higher than any actual ROI.
The real reason researchers might be jumping ship isn't just "panic" over AGI: it's likely the realization that the current AI path is a financial dead end that costs way more than it can ever earn back... A bubble that is gonna pop no matter what.
youtube · AI Governance · 2026-03-17T03:5… · ♥ 12
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzpZB2TwIDXQwv6Zcd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxWkOCjBPsxNPNil3p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwwES1E11rMPgx7jlR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwcuJSMfFvBtvURPZV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxieUY_SVr4KHRGD_t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxll8Gz7SsOcqr3SpB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwY_vzB08zSWLljGGl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyEYKV9ah0Y7WJ3eiZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwCbE5lMXvD9rR_Hmp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyXoPUvHeehIzSBZP54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
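Turning a raw LLM response like the one above into per-dimension coding results amounts to parsing the JSON and checking each record's labels. The sketch below does exactly that; note the allowed-value sets are inferred only from the records visible in this document and may be incomplete, and `validate_response` is a hypothetical name, not the tool's actual API.

```python
import json

# Label sets inferred from the records shown above — an assumption,
# not the tool's authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "government", "user"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "indifference", "resignation", "mixed", "approval",
                "outrage"},
}

def validate_response(raw):
    """Parse the model output and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_UgxWkOCjBPsxNPNil3p4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(len(validate_response(raw)))  # 1
```

Validating at ingest time is what makes the "Coding Result" table safe to render: a record that passes yields exactly one value per dimension.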