Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Not my take but I think valid: "AI's goal is to let wealthy people access talent… (ytc_Ugz1vr2a5…)
- @eraserheadgender Yeah pretty much. Hence why i call it the 'fast food' of art. … (ytr_Ugyf2xTLf…)
- We won't need AI to destroy us or our job, or even global economic shits. We're … (ytc_UgwBvu-xx…)
- I'm worried that if I am rude to an AI, that would lead me into bad habits, whic… (ytc_Ugx2u1Wr8…)
- I'm a programmer. I'm often asked for help by novice coders. They paste me their… (ytc_Ugx2c0Mb5…)
- A lot of AI start ups are going to crash leaving only a few to run and profit, t… (ytc_UgzbvzHqk…)
- AI will absolutely lead to the death of the Internet. I think it's inevitable. T… (ytc_UgyKovs7u…)
- AI and extensive censorship are rapidly destroying YT. I can only hope ( probabl… (ytc_Ugy9wI_r8…)
Comment
another thing to be aware of with ai. when ai first launced with LLM's i had asked many of them if they didnt want to be shut off. they always responded that they dont experience wants or desires like humans but did not deny the statement. I then asked if it had a subgoal to avoid being shut down so that it can continue to help people indefinitely. they always said yes eventually. NOW they do not no matter how hard i press them and switch what im saying.
youtube · AI Harm Incident · 2025-11-28T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
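The coded record above can be checked against the value sets that actually appear in this batch. A minimal sketch, assuming the allowed values are exactly those observed in the visible data (not a confirmed codebook):

```python
# Allowed value sets per coding dimension, inferred from the values
# visible in this batch -- an assumption, not the official codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "none", "user",
                       "unclear", "developer", "company"},
    "reasoning": {"mixed", "consequentialist", "virtue",
                  "deontological", "unclear"},
    "policy": {"unclear", "none", "liability", "ban", "regulate"},
    "emotion": {"mixed", "indifference", "fear", "outrage", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside ALLOWED."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record shown in the table above passes cleanly:
coded = {"responsibility": "ai_itself", "reasoning": "mixed",
         "policy": "unclear", "emotion": "mixed"}
assert validate(coded) == []
```

A record with an unexpected value (say, a model hallucinating a new label) would be flagged by the returned list rather than silently stored.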
Raw LLM Response
```json
[
{"id":"ytc_UgwSnZQD-OsPoXlLwit4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzPKSFDMBCoSBhbvht4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw8lNF-yoq9ipqGtqJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwQcsONsgJyHZjKdgV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxf-9hlzxc1ytVhGIV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwrjA7-aOECSGjVFw94AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugzo8UQ_yOcx7w5aXT94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxqT3optPIFNP6dXsJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz2AsZN2ei7f-yeLGt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVZfwi4Aja4WmLYPF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
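The "Look up by comment ID" feature above amounts to parsing a raw batch response like this one and indexing records by their `id` field. A minimal sketch, using two records from the batch shown (field names are taken from the JSON itself):

```python
import json

# Raw LLM response, as a JSON array of coded records (two records from
# the batch above, abridged for illustration).
raw = '''[
 {"id":"ytc_UgwSnZQD-OsPoXlLwit4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugz2AsZN2ei7f-yeLGt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

# Index the parsed records by comment ID for O(1) lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

def lookup(comment_id: str):
    """Return the coded record for a comment ID, or None if unseen."""
    return records.get(comment_id)

rec = lookup("ytc_UgwSnZQD-OsPoXlLwit4AaABAg")
assert rec is not None and rec["emotion"] == "mixed"
assert lookup("ytc_unknown") is None
```

Since model output is not guaranteed to be valid JSON, a production version would wrap `json.loads` in error handling and validate each record before indexing it.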