Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The idea of it becoming self-conscious is nonsense, as Sir Roger Penrose argued using Gödel's incompleteness theorem. However, the threat is very real, since any bad actor could use it; it is like nuclear weapons becoming accessible to bad actors. And it is not possible to halt it, because we are already past the point of no return: no matter what initiative we take, there would be someone like North Korea who doesn't play by the rules and would develop it anyway. So we have to develop it too, otherwise we will be completely defenseless. Abstinence from development is not an option either, since it is by now public knowledge that there is a powerful tool out there waiting to be developed that would render all modern defenses ineffective, as a cheap alternative or addition to WMDs. As far as I can see, this is a catalyst for a global merger into a Type 1 civilization, since we have reached a point where unregulated independent nation-states such as North Korea are simply no longer an option for global peace. Nuclear weapons are hard and expensive to develop, and there is the threat of an immediate counter-strike; but WMDs based on AI can be equally effective at wiping out the enemy (without infrastructure or environmental damage) and yet (a) keep the X factor of whodunnit if sufficiently well executed, and (b) be much cheaper, easier, and stealthier to develop. Nuclear weapons cannot be developed without testing that would send global triggers, but AI can be developed in any shack with a server farm.
Source: youtube · AI Governance · 2023-05-04T08:0… · ♥ 2
Coding Result
Responsibility: distributed
Reasoning: consequentialist
Policy: liability
Emotion: fear
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwPR2Kmr68oavJc64B4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzctNjZOQxS8IxhAF14AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxR3mtlRHv9Ka0FvFJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyvYU36nnExB_Rvd_d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgztE_des2X7uXGUS294AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzIBDTjA77Mpwn3yiB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyKZ-QFXQINmVyOOnl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwCvJPB0cnp8dAMJCR4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx9ihceHIq5Ts0ceq54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxrfQQ4UHofbm9NDTZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
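The raw response is a JSON array of per-comment codes, keyed by comment `id`. A minimal sketch of how such a batch can be inspected for one coded comment — the variable names here are illustrative, not part of any tool shown above, and the array is truncated to two entries for brevity:

```python
import json

# Raw LLM response: a JSON array of per-comment codes (truncated here;
# the full ten-entry array appears above).
raw_response = '''
[
  {"id": "ytc_UgztE_des2X7uXGUS294AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyKZ-QFXQINmVyOOnl4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "resignation"}
]
'''

# Index the batch by comment id so one comment's codes can be looked up.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw_response)}

codes = codes_by_id["ytc_UgztE_des2X7uXGUS294AaABAg"]
print(codes["responsibility"], codes["policy"])  # distributed liability
```

Looking up the comment's id in this index reproduces the dimension values shown in the Coding Result section above.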