Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No one loves Neil more than me. I think he should stay in his own lane on this topic however. When he says we’ll just unplug it if it goes rogue that right there is telling me he is not educated well enough on what’s actually going on in the industry. We do not have any way to just unplug it if it goes super intelligent. We also do not have any way to align it if it goes super intelligent every CEO of every AI company acknowledges this and says oh don’t worry we will figure it out someday or somehow or we’ll just figure it out when it happens!? Please educate yourself like I have in the last week I have gone from the same mindset of yeah it’s not gonna be a big deal. Maybe you know it’ll happen and we can just unplug it. The situation is way darker and way more scary than you think. If you actually have kids, you will be the first people to go and watch these videos and seriously question. What is going on right now in the world. I recommend. liron. Just type LIRON and ai safety. Or TOP AI RESEARCHER WARNS OF EXISTENTIAL RISK. You will very quickly realize some of the smartest people in the world like well-known physicist you’ve been on TV and people who have been deleting researchers and contributors to AI in the last 20 years are now backing off and saying hey we really should for treaty to actually stop the pursuit of super intelligence.
youtube AI Responsibility 2025-11-25T13:2…
Coding Result
Dimension      | Value
-------------- | --------------------------
Responsibility | developer
Reasoning      | consequentialist
Policy         | regulate
Emotion        | outrage
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwwkOrKX0I4sRA41CR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxm7KXDV7-nrm_5Z6t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx_82_QfTn6Ocubhzl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyjmKWTRi30rqyxy3J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyQ0XRkZyiufKB2oIl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz0RJQYGSKSNgCHAZ14AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxXT7PwZfQGZxNsNJd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyeDXD9x-_lVhRFFkV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzCgRbX6txKqEBJvr94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyVUL5IUdPLBXoO7iJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
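When inspecting a raw response like the one above, it helps to parse and validate it before trusting the per-comment codes. The sketch below is a minimal, hypothetical validator: the allowed value sets are assumptions inferred only from the labels that appear in this dump (the real codebook may permit more values), and the function name `parse_codes` is illustrative, not part of any tool shown here.

```python
import json

# Allowed labels per dimension -- an ASSUMPTION inferred from the labels
# visible in this raw response, not an authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "distributed", "user", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every dimension value.

    Raises ValueError if the JSON is not a list of objects or if any
    object carries a label outside the allowed set for its dimension.
    """
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={value!r}")
    return rows
```

A validator like this catches the common failure mode of LLM coders: syntactically valid JSON that invents a label outside the codebook, which would otherwise flow silently into downstream tallies.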