Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "No, rogue organization or hacker groups aren't a treat. Training this "AI" model…" — ytc_UgysDLoAU…
- "I am just wondering. So, AI takes over all the jobs. How are we supposed to surv…" — ytc_UgzhzxhTc…
- "This is a pretty shit time to push this, ai is constantly showing that its just …" — ytc_UgxMhhwqj…
- "I thought the first one looked incredibly fake,i guess if you see so much ai you…" — ytc_UgzBeTKI-…
- "@laraprisma6381 What do you mean "not clear"? "I can give it a pass on taking an…" — ytr_UgxXyOOah…
- "Trying not be too rude, but these interviews help with imposter syndrome and exa…" — ytc_Ugz9QONbI…
- "Try Humanlike Writer - it's specifically designed to create content that sounds …" — ytr_Ugxs7ZJrt…
- "He asking for more funding to create ai safety which is impossible from old ass …" — ytc_UgwaJaBtJ…
Comment
Yo, computer programmer here who has built LLMs from scratch. Let's pump the brakes on the doom talk. This guy is smart, and his work is important, but the things he speaks of are still theoretical, and there are MANY experts in this field who disagree with him. I promise you things will not play out how he says they will. Nobody has a crystal ball, and nobody can see the future; there are way too many variables. ASI is not simply going to kill us all, and it will not be magic or "all-knowing" - it will still be bound by the same laws of physics as us.
I would also like to point out the absurdity of assuming that such a level of intelligence is even possible, let alone manufacturable by humans. The creators of the current best AI models have all agreed that even AGI (a full level below ASI) is not achievable with current model architecture.
Current model architecture is basically just glorified auto-complete for text - just a more sophisticated version of your phone suggesting the next word to say when you're typing out a text message.
If I had to make a bet, I would say we will not achieve anything even close to "Artificial Super Intelligence" any time within the next 30 years. If it's even possible at all, it will take centuries.
Just my 2 cents. I think it's easy to fall down the black hole of doom when talking about these subjects, when really there is no evidence that any of this will actually happen. Not saying we shouldn't be having discussions about AI safety though, we definitely should!
ALSO one final note - humans are capable of making changes and usually they do fight for change if they collectively agree that the current human condition is bad. If we achieve ASI and decide we aren't better off for it, we will adapt.
youtube · AI Governance · 2025-10-30T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyiSLdlYXJlkl1bZV54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyzv8Nb-RjRD_BSSQR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwHyy4_pvvtw6RXGJl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwY8YDjqQFRrwqdIe14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyw4pTxTGOd8O09mxB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwJJ2ldzyt0Oa6PeYd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgykIzkFPuKymSm58f94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwOoJdBHd3LBqtwkvF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzVp4tUPSlQYUKqxDF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugxn_Ts3n0JTKZTADSl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"}
]
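The raw response above is a JSON array of per-comment codes, which is what makes the "look up by comment ID" view possible. A minimal sketch of that lookup in Python, using the dimension names shown in the coding table (`responsibility`, `reasoning`, `policy`, `emotion`); the comment IDs in this example are illustrative, not real ones from the samples above:

```python
import json

# Example raw LLM response: a JSON array of coded comments, in the
# same shape as the response shown above. IDs here are placeholders.
raw_response = """
[
  {"id": "ytc_example1", "responsibility": "developer",
   "reasoning": "mixed", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_example2", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_example1"]["policy"])  # industry_self
```

In practice the model's output may need validation before indexing (e.g. checking that every record has all four dimensions and a non-empty `id`), since a malformed array would make `json.loads` raise or the lookup silently miss comments.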