Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by its comment ID.
Comment
When he says that “we wouldn’t even know” he means that an ai could convince humanity that it did not exist and that our actions were conceived by us. The level of coercion that a vastly superior intelligence could exert over us might be indiscernible from our own thoughts. People keep fussing over an overlord scenario when what we are really afraid of is loss of self volition AND the awareness of it. We may already have this problem from an ET race (or more than one, which could include an ai), and again, we may be unaware if it. A phrase I have used for years is “Skynet is already online.”
youtube · AI Governance · 2023-04-18T03:4… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwYp9jvQz0fCkFT0-V4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxEyh-LOKSdToRYeth4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzQDmOyMBqXXs12N4h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwO1GPHNJ6bGzy3-Pp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyUZ0kw7nf5zSjKjkN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwVmOj69txhtRiM07Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyBisoSk0ahEu2MrTd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzdQDQazAiDCoojUxJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw62JoEsixjA4BesO14AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw_gCRuw8qyru7DALV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
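The raw LLM response above is a JSON array of coding records, one per comment. A lookup by comment ID amounts to parsing the array and indexing the records by their `id` field. A minimal sketch (record fields and IDs taken from the response above; `raw_response` is assumed to hold the model output as a string, here abbreviated to two records):

```python
import json

# Raw model output: a JSON array of per-comment coding records.
# Shown abbreviated; in practice this would be the full response text.
raw_response = """[
  {"id": "ytc_UgwYp9jvQz0fCkFT0-V4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw_gCRuw8qyru7DALV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

# Index each coding record by its comment ID for constant-time lookup.
codings = {record["id"]: record for record in json.loads(raw_response)}

# Look up the coding for one comment, as the "Coding Result" panel does.
record = codings["ytc_Ugw_gCRuw8qyru7DALV4AaABAg"]
print(record["responsibility"])  # -> ai_itself
print(record["emotion"])         # -> fear
```

The `id` prefixes (`ytc_` for top-level YouTube comments, `ytr_` for replies, `rdc_` for Reddit comments) would let the same index span multiple platforms.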