Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "ai just takes an average of a million pictures and makes a semblance of them, it…" (ytr_UgxLE7tME…)
- "I think we shouldn't program to feel pain. What would be the point? It just seem…" (ytc_UgzahW5WK…)
- "EU just agreed to shoot itself in the foot and will get left behind in the AI ra…" (ytc_UgyaZFQ0V…)
- "As an actual researcher in biology, I laughed out loud when he said AI can do Ph…" (ytc_UgyJhUy98…)
- "Engineer asked: you want to destroy humans? Robot; okay, I want to destroy human…" (ytc_UgydSU38S…)
- "At the end of the day, you’re not doing any of the work. I mean yeah, if you wan…" (ytr_UgxmWy-4_…)
- "The answer is easy: you pass laws where specific jobs cannot be replaced by AI r…" (ytc_UgxDkuAoG…)
- "It is hard to get a entry-level dev job because you will compete with graduates …" (ytc_UgxHdXuoL…)
Comment

> Losing our purpose its gonna be the first part of a big plan, second is gonna be an absolutely breakpoint in the worlds economy and humans psychological health, then after a few years when most humans either live by themselves in closed bubbles or are dead because of depression/lack of income we will soon be replaced by ai, ai here ai there they will soon but slowly replace us to create their own world of artificial beings, in a few hundred years all human life will be done for and the planet will be dominated by only what remained of our history, taken by ai and replaced by them, its the next step to evolution and there is nothing we can do about it but having common sense and doing something to stop it all.

youtube · AI Governance · 2025-08-26T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
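
If the coded results are consumed programmatically, each one can be treated as a small fixed record. Below is a minimal Python sketch of that record; the class and field names are assumptions for illustration, and only the dimension names and example values come from this page (the ID is taken from the first entry of the raw response below, which carries the same values as the table above).

```python
from dataclasses import dataclass


@dataclass
class CodedComment:
    """One coded comment, mirroring the dimensions shown in the table above."""
    comment_id: str
    responsibility: str  # e.g. "distributed", "developer", "user", "ai_itself", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue"
    policy: str          # e.g. "regulate", "ban", "liability", "industry_self", "none"
    emotion: str         # e.g. "fear", "outrage", "approval", "resignation"
    coded_at: str        # ISO 8601 timestamp of the coding run


# The coding result shown above, as a record.
example = CodedComment(
    comment_id="ytc_Ugy29htWaxqJDB78gQt4AaABAg",
    responsibility="distributed",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at="2026-04-27T06:24:59.937377",
)
```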
Raw LLM Response
[
{"id":"ytc_Ugy29htWaxqJDB78gQt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwGYLH5aYrwyIkskcF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgweOTT0F05j-9FAtnJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxNGyO4loNUPQeZflt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwhc49xB8f29Y0bMoV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyRBOdvVpDVfOrCjSh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxZQ5z0fSTwahdr1sl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy8i3mSArc_JP5FQn54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyyyuyTzLAkpe6heD14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxGT3_jPQeL0SR9SV14AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"}
]
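
The raw response is a plain JSON array with one object per coded comment, so the lookup-by-ID view can be reproduced by parsing the array and indexing it. A minimal Python sketch follows, assuming the response text is available as a string; the function name is illustrative, and only two entries from the array above are inlined here.

```python
import json

# Two entries copied from the raw response above; in practice the full
# array string would come from the inspector or its export.
raw_response = """[
  {"id": "ytc_Ugy29htWaxqJDB78gQt4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyyyuyTzLAkpe6heD14AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""


def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    return a mapping from comment ID to its coded dimensions."""
    return {entry["id"]: entry for entry in json.loads(raw)}


coded = index_by_comment_id(raw_response)
coding = coded["ytc_Ugy29htWaxqJDB78gQt4AaABAg"]
print(coding["responsibility"], coding["policy"], coding["emotion"])
# -> distributed regulate fear
```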