Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This will only happen given that A.I actually and somehow become a million times…" (ytc_UgwfF0hb_…)
- "I for one get it. Though it definitely should've been worded better. The argumen…" (ytc_Ugy5hPI5j…)
- "I'm willing to bet they suck ass at making art so AI is the only way for them to…" (ytr_UgykGFx50…)
- "The current AIs are already motivated to do stuff, at least insofar as there's s…" (ytr_UgzCQ-iUB…)
- "Good thing ai hasnt taken music over yet or id be out of a- chatgpt: 👁️👄👁️…" (ytc_UgzFxOdgD…)
- "Thank you for sharing your concern. It's important to remember that the purpose …" (ytr_UgwJ2oATp…)
- "That was coded that way.. until someone codes robots for that purpose.. AI is da…" (ytc_UgwCZU7UJ…)
- "@AFK_Is_META But if it's so lacking, any discerning audience should be able to d…" (ytr_UgzfeZzgy…)
Comment
> One of the things I find interesting is when I was in college, (graduated 1975), one of the graduate physics students did a series of calculations that showed that reality didn't exist. I took it to mean we would be living in a Matrix world long before the movie.
>
> To me the biggest problem with AI isn't taking all the jobs, it is AI being put in charge. After all it is smarter and thinks faster so what would be bad with doing it. At least that will be the thinking. As I understand it AI works using some type of neural network programming working in a similar way we think our brains work. What could happen if there is some type of error that causes AI to not quit working but become insane.
>
> Imagine a world with Hannibal Lecter running things. A narcissistic megalomaniac perhaps. The insane that are running most of the western world are bad enough. Imagine what AI could do.
youtube · AI Governance · 2026-01-30T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxNAkTJxAsXuDYIc9N4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgznXzioPq2elAwruY94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxwFhl4Q3OL_pGmiOd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzLNYjeCK1PF7jBGFx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwa-72hXb59ZNP-vO94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw0Ilv2q78OY-ZbNml4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyft-0UAEGOtDhHXh54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx4W-2hMILUljVvoZJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgysnJ2DFaa1D3jx1e94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzbzL6Yk8ZruN4LTZ14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
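The raw response above is a flat JSON array, one object per coded comment, with one value per coding dimension. As a minimal sketch of how a downstream step might parse and sanity-check such a batch: the controlled vocabulary below is inferred only from the values visible in this dump (the real codebook may allow more), and the function names `validate_batch` and `tally` are illustrative, not part of the tool.

```python
import json
from collections import Counter

# Vocabulary inferred from the coded samples shown above; assumption,
# not the tool's authoritative codebook.
VOCAB = {
    "responsibility": {"government", "developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response; reject rows with a malformed id,
    a missing dimension, or an out-of-vocabulary value."""
    rows = json.loads(raw)
    for row in rows:
        # Comment ids in this dump start with ytc_ (comment) or ytr_ (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad id: {row.get('id')!r}")
        for dim, allowed in VOCAB.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: {dim}={row.get(dim)!r}")
    return rows

def tally(rows: list[dict], dim: str) -> Counter:
    """Distribution of one coding dimension across the batch."""
    return Counter(row[dim] for row in rows)
```

Run against the ten-row batch above, `tally(rows, "emotion")` would count 4 fear, 3 outrage, 2 indifference, and 1 resignation.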