Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
But how!? What math other than the probalistic word-calculater that we have now will result in RSI to come up with super AGI!? These CEOs and researchers either talk from strong economic incentives or from a somewhat disconnected promise of scientific breakthrough. I agree that poorly implemented AI's can/will have harmful results and we should definitely have strong regulatory practices in place, but implementation here is key. Though being told otherwise by the owners of these agents, they are still unreliable, without good use-cases (less than 5% according to a new MIT report) and just frankly shit, and thus won't be implemented wide enough. These machines will not come up with new stuff when let loose on its own research, it will just come up with other versions of the data its trained on. Current LLMs and AIs cannot make up new stuff. They don't have imagination. They are just regurgitating whatever it has in its n-dimensions. The AGI-dream will fizzle out when the money does and investors are already getting nervous. Much sci-fi tech is just plot devices.
youtube AI Governance 2025-08-26T20:2… ♥ 2
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
  {"id": "ytc_UgwLIEkqV4EGx3tkYPR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzRAZrW2cEydr0kGAl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_UgxtBwZxuWDysCfEKzh4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwQWpMxTRk_XmCJBWF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugypv6UqLxzw-qvYtfh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
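A minimal sketch of parsing and validating a raw response like the one above. The allowed-value sets below are assumptions inferred only from the values visible in this page, not the full codebook, and the `validate_coding` helper is hypothetical:

```python
import json

# Allowed values per dimension -- an illustrative subset based only on
# the values visible above; the real codebook may define more.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "disapproval", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record needs an id and a known value for each dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgwLIEkqV4EGx3tkYPR4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(validate_coding(raw)[0]["policy"])  # regulate
```

Records that fail validation (unknown dimension value, missing id) are dropped rather than repaired, so a downstream tally only ever sees codebook-conformant rows.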