Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
And certain groups of people still keeps saying(and insisting) "Nah my job is safe. Also new jobs will be created!". To put it in "modern terms" - those people are sniffing hard copium. Even if certain new jobs will be created, the amount of them will be insanely improportionate to how many jobs will go into decline. Either governments (and corporations) will do some serious regulations(and good ones at that... which isn't going to happen), or it's as Yampolskiy said - we'll be facing a 99%+ unemployment. In scenario where there would be an universal income, and one high enough to sustain people (and do so at a reasonable level), it may be not too bad as long as people will find something to occupy their time with (and it would be something not destructive). What about a scenario where there isn't a universal income? Or it's way too low to cover expanses that are needed for a healthy life? And what about a scenario where AI decides that humans are a total waste or space and resources and/or are too volatile? Yet "The most intelligent and richest people on the world" insist on ignoring safety, morals & regulations. If was at the top, I can't confidently say what I'd propose as a solution, but without being a pessimist nihilist or so - but simply someone who just realistically looks at it - I'm pretty sure I can confidently say that, objectively, we're seriously going head-first into some kind of total global collapse, into apocalypse. ...I probably just said what many people are thinking. Question is... what can we do about it?
youtube AI Governance 2025-09-04T12:2…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgztYyOPwuR7FcCZoKR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzFG234UINhBiGViWV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxA5PhgivBKPxZCd6d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyHj6OuIA_Dr0t2yNp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzVIBGJJI5Lm7w9bvJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwxT4qXtUdfwBwwDph4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzhII5FEQ8J1bjbtTl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwhUOB7LBDQ7XqjdQh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxbrdFQaLLpWXclS054AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzFV4D2y9W6_ihDyZt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
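The raw response is a JSON array with one object per comment, keyed by comment id, with one field per coding dimension. A minimal sketch of how such a response could be parsed and tallied, assuming only the schema visible above (the two embedded entries are copied from the raw response; the real payload has ten):

```python
import json
from collections import Counter

# Two entries copied from the raw LLM response above, for illustration.
raw = ('[{"id":"ytc_UgztYyOPwuR7FcCZoKR4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"},'
       '{"id":"ytc_UgzVIBGJJI5Lm7w9bvJ4AaABAg","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')

codes = json.loads(raw)

# Index codes by comment id so a single comment's result can be looked up,
# as in the "Coding Result" table above.
by_id = {c["id"]: c for c in codes}
print(by_id["ytc_UgzVIBGJJI5Lm7w9bvJ4AaABAg"]["emotion"])  # -> outrage

# Tally each dimension across all coded comments.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, dict(Counter(c[dim] for c in codes)))
```

Keeping the codes as plain dicts keyed by id makes it easy to join a coding result back to its source comment when inspecting individual model outputs.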