Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples:

- Its going to create a dystopian society, where there will be the very rich and m… (ytc_UgwZVjEBl…)
- The first robot I saw 26 years ago walked stiffly as if it had a carrot up it's … (ytc_Ugx7MhGV2…)
- everything what is suitable for those in power is ok , no matter if unsustainabl… (ytc_Ugy9GIq7u…)
- Question- How do we not know that we are already in an existence that is one of … (ytc_UgyqvJO4R…)
- This is a serious subject, but then the thumbnail for this is overly dramatic, a… (ytc_UgyjMD04e…)
- Is this ai or real life .. imagine that robot turned and lit everybody up .… (ytc_UgzqMJABi…)
- It is "almost guaranteed" that AI super intelligence will be developed? What ? … (rdc_kqsx7p4)
- Even notepad++ has function auto complete, sell this as AI improvement is close … (ytc_UgyTMFxb0…)
Comment
If people do not take reasonable care, and a person is hurt or killed, the person who did not take that reasonable care can go to jail. That happens all the time. Left an unsecured gun out and a kid gets killed? You go to jail. If Sam Altman and the rest of these people faced the prospect of dying in prison if they build an AI and give it access to the controls necessary to operationalize killing people, they wouldn’t be talking in the cavalier way they are and they wouldn’t be at all tempted to roll the dice with a 10% chance people will be killed. AI should not be given control over systems, it should be able to recommend only and people implement the actions if they make sense. In no situation at all should a computer be given control over the nuclear button or the ability to control the power grid, or deliberately destroy crops, etc. If a company does that, it should be a crime that the people involved go to jail for.
Source: youtube | Topic: AI Governance | Posted: 2025-08-30T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgwB7V4CYxOmBTfx7sl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzu33lIKGJ07SD1BIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxhOjRpNDYmovr1TsV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxMs7Rxh1zab6V6Uvx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzMXinURlHf8LYW0X14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]