Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or inspect one of the random samples below:

- "That's not entirely correct. Poisoning data sets does not necessarily prevent AI…" (ytr_Ugyym-wbL…)
- "I've been saying the same about AI for years and I have used an example of horse…" (ytc_Ugy-vtJL4…)
- "A.I. is just another tool a manager can use to leverage against the workers. The…" (ytc_Ugw0Y9Cpb…)
- "Please dont make robots... some day some people will exploit this and make a evi…" (ytc_UgxO12K08…)
- "Dude I hate AI so much , it took me so long to learn how to draw, just to have …" (ytc_UgwoNsGnz…)
- "Value will always concentrate at some points. It will never be even. Not even su…" (ytc_UgwPdIEck…)
- "The biggest problem with the argument that we should have no expectation of priv…" (ytc_Ugw6DnMsA…)
- "Art consumers aren’t as selective as (human) Artists hope they will be. When co…" (ytc_UgzLKLj_h…)
Comment (youtube · AI Governance · 2025-09-05T07:3…)

In the interview (topic: simulation, around 1:00:00), there is a logical inconsistency. The speaker claims that AI will destroy the world, that there is no other possibility, and that there is no way out. At the same time, they assert that we are 100% certainly living in a simulation. These two statements seem contradictory. If we are indeed living in a simulation, it is unlikely that we can be destroyed by AI, as we are probably already in a simulation model controlled by an AI.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwNWnvwaRXK-qscyUl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugyu4cclj72Cz_uG46p4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz3KUpQrWqUZvUNSJl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwt_a2AA8Xws3vRxOx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugza9CpmVqOtIGtGBph4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxCynS_Isbodz9gYRR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxKZlfRRNAGHvV2roB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyJGAJpGNkoOZbc8AZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzWNGGe5CmZGwji9wx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzQbxAz_MV543kHITh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
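The raw response is a JSON array of per-comment records, each carrying the four coded dimensions shown in the table above. A minimal sketch of parsing it and looking up a record by comment ID (the label sets below are only the values observed in this batch, not necessarily the full code book; the `lookup` helper is hypothetical, not part of the pipeline):

```python
import json

# Labels observed in this batch's output (assumption: the code book may define more).
RESPONSIBILITY = {"ai_itself", "developer", "company", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}

raw = '''[
{"id":"ytc_UgwNWnvwaRXK-qscyUl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugwt_a2AA8Xws3vRxOx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]'''

def lookup(records, comment_id):
    """Return the coded record for a given comment ID, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)

# Basic validation: every record carries all four dimensions with known labels.
for r in records:
    assert {"id", "responsibility", "reasoning", "policy", "emotion"} <= r.keys()
    assert r["responsibility"] in RESPONSIBILITY
    assert r["reasoning"] in REASONING

hit = lookup(records, "ytc_Ugwt_a2AA8Xws3vRxOx4AaABAg")
print(hit["emotion"])  # → resignation
```

Parsing the model output strictly like this surfaces malformed or off-label responses before they reach the coding table.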