Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yo, computer programmer here who has built LLMs from scratch. Let's pump the brakes on the doom talk. This guy is smart, and his work is important, but the things he speaks of are still theoretical, and there are MANY experts in this field who disagree with him. I promise you things will not play out how he says they will. Nobody has a crystal ball, and nobody can see the future; there are way too many variables. ASI is not simply going to kill us all, and it will not be magic or "all-knowing"; it will still be bound by the same laws of physics as us. I would also like to point out the absurdity of assuming that such a level of intelligence is even possible, let alone manufacturable by humans. The creators of the current best AI models have all agreed that even AGI (a full level below ASI) is not achievable with current model architecture. Current model architecture is basically just glorified auto-complete for text, a more sophisticated version of your phone suggesting the next word when you're typing out a text message. If I had to make a bet, I would say we will not achieve anything even close to "Artificial Super Intelligence" within the next 30 years. If it's even possible at all, it will take centuries. Just my 2 cents. I think it's easy to fall down the black hole of doom when talking about these subjects, when really there is no evidence that any of this will actually happen. Not saying we shouldn't be having discussions about AI safety, though; we definitely should! ALSO, one final note: humans are capable of making changes, and they usually do fight for change if they collectively agree that the current human condition is bad. If we achieve ASI and decide we aren't better off for it, we will adapt.
YouTube · AI Governance · 2025-10-30T03:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           industry_self
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
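
For downstream analysis, each coding like the one above can be held as a typed record. Below is a minimal Python sketch of such a record; the field names follow the table above, while the example values in the comments are inferred from the raw response further down, so the full value sets are an assumption rather than the tool's documented schema.

import dataclasses

@dataclasses.dataclass
class CommentCoding:
    # Minimal sketch of one coded comment (assumed shape, not the tool's own model).
    id: str              # YouTube comment id, e.g. "ytc_Ugyw4pTxTGOd8O09mxB4AaABAg"
    responsibility: str  # who is held responsible: e.g. "developer", "company", "government", "ai_itself", "none"
    reasoning: str       # reasoning style: e.g. "consequentialist", "mixed", "unclear"
    policy: str          # policy preference: e.g. "regulate", "ban", "liability", "industry_self", "none", "unclear"
    emotion: str         # dominant emotion: e.g. "fear", "outrage", "approval", "resignation", "indifference"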
Raw LLM Response
[ {"id":"ytc_UgyiSLdlYXJlkl1bZV54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugyzv8Nb-RjRD_BSSQR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwHyy4_pvvtw6RXGJl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwY8YDjqQFRrwqdIe14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyw4pTxTGOd8O09mxB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgwJJ2ldzyt0Oa6PeYd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgykIzkFPuKymSm58f94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwOoJdBHd3LBqtwkvF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzVp4tUPSlQYUKqxDF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"}, {"id":"ytc_Ugxn_Ts3n0JTKZTADSl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"} ]