Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There's a few things to consider: running an AI on the level like Chat GPT is still a huge endeavor. You require data centers, which consume an according amount of power, require maintenance, cooling etc. Not only is this a huge logistic task, that the AI has to get right, but it is also a very fragile system with a lot of points of failure. The more advanced and "powerful" the AI gets, the more vulnerable it gets. As do all complex systems. There isn't such a thing as an "unbeatable super intelligence".

Then regarding all the warnings from "top researchers" and people at OpenAI: this is just promotion talk. Of course they would say how unknowable and dangerous their systems are. It generates interest. This is just a very clever viral marketing campaign, nothing more. If they didn't know how the system works, they couldn't continue develop them, patch out bugs (like they get hallucinating more and more under control) etc. The only unknown is how a system will react with a given data set. But the inner workings are not a mystery. There are people basically building complex AI systems in their garage, like the YouTuber vedal with his project NeuroSama.

Now for the actual dangers of AI, that did not really get addressed in the video, which disappoints me a bit, as you normally do very diligent research: AI is most and foremost a tool. A new wrench but more advanced. Those who will master them will begin to exploit it to dominate those who can't keep up, which will only magnify the social gaps we already have today. This is the only thing I actually fear about the new AI systems, as this will actually lead to a downfall of society.
youtube · AI Governance · 2023-07-08T17:5… · ♥ 186
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugyx0kRX62KRVbmU2XJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzyjutaFh4dASmiAxp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyW497hyah9pSbLPQ14AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyQaUatkICOSS-s1NB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxrqyFEqaxF8NGfPPl4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwxaGSS3mv0ppPUgMB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzGUazI5KMYeG4Vv354AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxpHbRTlNe8UwL8dK54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwOR884VLE4vV9763p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzsrQOPy8p5ZogPMPN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
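As an illustration, here is a minimal Python sketch of how a raw response like the one above can be parsed into per-comment codings. The value sets used for sanity-checking are just the values observed in this particular response, not necessarily the full coding scheme, and the function name `parse_codings` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Values observed in the raw response above (an assumption, not the
# complete scheme the coder may allow).
OBSERVED = {
    "responsibility": {"developer", "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of per-comment objects)
    into {comment_id: {dimension: value}}, warning on unexpected values."""
    out = {}
    for row in json.loads(raw):
        cid = row["id"]
        coding = {dim: row.get(dim, "unclear") for dim in OBSERVED}
        for dim, val in coding.items():
            if val not in OBSERVED[dim]:
                print(f"warning: {cid}: unexpected {dim}={val!r}")
        out[cid] = coding
    return out

# One entry from the response above, as a usage example.
raw = ('[{"id":"ytc_UgzyjutaFh4dASmiAxp4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_UgzyjutaFh4dASmiAxp4AaABAg"]["policy"])  # regulate
```

Keeping the lookup keyed by comment id makes it easy to join each coding back to its comment, as the "Coding Result" table above does for the displayed comment.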