Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response directly by comment ID.
Random samples:

- "Ai literally did nothing wrong. It's a sweet little robot that just wants to dra…" (`ytc_UgyoaJ7X5…`)
- "The issue isn’t whether machines will become evil it’s what will evil people do …" (`ytc_UgxlNvjGB…`)
- "Yeah same. This is the *worst* AI will ever be. And it’s already pretty fuckin s…" (`rdc_k7lcik9`)
- "Its self driving. However No tesla is an autonomous vehicle. You are supposed t…" (`ytc_UgxHJ_DhH…`)
- "We understand your concern, but it's important to remember that artificial intel…" (`ytr_UgxyvJQiL…`)
- "The image of the beast in the Book of Revelation, whom kills anyone who doesn't …" (`ytc_UgwPArF0K…`)
- "I'm not saying we should be "rude" to AI - but I'm buying that argument. Sorry.…" (`ytc_UgxkmyEqX…`)
- "Naw dats NOT AMERICAN DRUGS OUR KUSH OVER HERE IS GROWN AS PLANTS N A NORMAL BOT…" (`ytc_Ugw_eezRu…`)
Comment
So it seems the question is will AI be obedient when it is vastly more intelligent than the most intelligent human being imo. So does it become conscious to make its own decisions the more intelligent it gets? I would have liked to ask if Mr Hinton or anyone other AI expert has seen AI make its own decisions
youtube · AI Governance · 2025-08-15T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwn1OeUBybtoMhHjT94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxVeKfaw9q95QUes3R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwvVDR2aHYjrpEDWEx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyrHx5_Syff58yOD2Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxTX_yAZh9yLZfckk54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyHDbe6x_1smtlYjbZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyopZhnovNUhKabnSV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxdvbBzrmM5jrv4MCd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQv7YwhAMUQO2edCl4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6OScoX5YaI0DWeSZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
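A raw response like the one above can be parsed into a per-comment lookup with a few lines of Python. This is a minimal sketch, not the tool's actual implementation; the allowed label sets below are inferred from the values visible in this page (the real codebook may define additional labels), and the function name is hypothetical.

```python
import json

# Allowed labels per dimension, inferred from the visible responses and
# the coding-result table above (assumption: the full codebook may differ).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "unclear"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}.

    Missing dimensions fall back to "unclear"; labels outside the
    allowed sets raise, so malformed outputs surface immediately.
    """
    coded = {}
    for entry in json.loads(raw):
        comment_id = entry["id"]
        codes = {dim: entry.get(dim, "unclear") for dim in ALLOWED}
        bad = [dim for dim, val in codes.items() if val not in ALLOWED[dim]]
        if bad:
            raise ValueError(f"{comment_id}: unexpected label(s) for {bad}")
        coded[comment_id] = codes
    return coded
```

With this in place, the "look up by comment ID" view is just a dictionary access on the parsed result, e.g. `parse_llm_response(raw)["ytc_Ugwn1OeUBybtoMhHjT94AaABAg"]["emotion"]`.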