Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
When AI gets a hold of black Magick, it will probably bring to life the game "Do…
ytc_UgwHa6gpC…
I get what Penrose is saying and I think I understand it (being conscious after …
ytc_UgwKmpUpA…
i think AI is here to stay yes and i don't even think it's necessarily a bad thi…
ytc_Ugy4w3-z5…
Did you know, a bunch of extremely intelligent people, like Steven Hawking can m…
ytc_UgiWDiGIh…
A- Sophia the robot has a Saudi Arabian citizenship
B- Japanese robot kengoro ha…
ytc_Ugxd6jErZ…
It all comes down to two possibilities with AI.
First, most likely scenario is i…
ytc_UgzTa2K30…
Its fine if they let the AI and robots take over the jobs, but they also gotta g…
ytc_UgzVJE7qd…
There was already a robot that unalived itself because it didn't want to be on t…
ytc_UgwQueV0W…
Comment
And certain groups of people still keeps saying(and insisting) "Nah my job is safe. Also new jobs will be created!". To put it in "modern terms" - those people are sniffing hard copium. Even if certain new jobs will be created, the amount of them will be insanely improportionate to how many jobs will go into decline.
Either governments (and corporations) will do some serious regulations(and good ones at that... which isn't going to happen), or it's as Yampolskiy said - we'll be facing a 99%+ unemployment. In scenario where there would be an universal income, and one high enough to sustain people (and do so at a reasonable level), it may be not too bad as long as people will find something to occupy their time with (and it would be something not destructive). What about a scenario where there isn't a universal income? Or it's way too low to cover expanses that are needed for a healthy life? And what about a scenario where AI decides that humans are a total waste or space and resources and/or are too volatile? Yet "The most intelligent and richest people on the world" insist on ignoring safety, morals & regulations. If was at the top, I can't confidently say what I'd propose as a solution, but without being a pessimist nihilist or so - but simply someone who just realistically looks at it - I'm pretty sure I can confidently say that, objectively, we're seriously going head-first into some kind of total global collapse, into apocalypse.
...I probably just said what many people are thinking. Question is... what can we do about it?
youtube
AI Governance
2025-09-04T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgztYyOPwuR7FcCZoKR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzFG234UINhBiGViWV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxA5PhgivBKPxZCd6d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyHj6OuIA_Dr0t2yNp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzVIBGJJI5Lm7w9bvJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwxT4qXtUdfwBwwDph4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzhII5FEQ8J1bjbtTl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwhUOB7LBDQ7XqjdQh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxbrdFQaLLpWXclS054AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzFV4D2y9W6_ihDyZt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
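A raw response like the one above can be checked before its rows are stored as coding results. The sketch below is a minimal validation pass, assuming the field names shown in the JSON (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the allowed value sets are inferred only from the values visible on this page, so the real coding scheme may permit more categories.

```python
import json

# Dimension vocabularies inferred from values visible on this page
# (assumption: the actual codebook may allow additional categories).
ALLOWED = {
    "responsibility": {"government", "company", "developer",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "indifference", "mixed"},
}

def validate(raw: str) -> dict:
    """Parse a raw LLM response, index rows by comment id,
    and flag any dimension value outside the known vocabulary."""
    rows = json.loads(raw)
    coded, errors = {}, []
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                errors.append((row.get("id"), dim, row.get(dim)))
        coded[row["id"]] = row
    return {"coded": coded, "errors": errors}

# Hypothetical single-row response for illustration
raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"outrage"}]')
result = validate(raw)
print(result["coded"]["ytc_example"]["policy"])  # regulate
```

Indexing by `id` also makes the "look up by comment ID" view above a simple dictionary access rather than a scan over the response.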