# Raw LLM Responses

Inspect the exact model output for any coded comment.

## Random samples
- "Simply don't care. I am not giving up my autonomy to a machine. The concept of a…" (`ytc_Ugx-Sfffa…`)
- "8:00 - the answer is indeed, Terrifying. I read that these AI creators have adm…" (`ytc_Ugx9XsFf7…`)
- "I don’t know if everyone is overreacting about ai but I am worried. I am at a po…" (`ytc_UgziVyGbb…`)
- "this ai is ofccourse programmed to answer questions that is favorable to the int…" (`ytc_UgyDcDWMR…`)
- "I'll believe its AI when I press a defibrillator to its outer case and light it …" (`ytc_Ugxab0B6N…`)
- "Dissenting opinion: generating AI images, even without any intent to make profit…" (`ytc_Ugy_aoFpV…`)
- "And what if AI could replace Macron, that could be an excellent way to improv…" (translated from French) (`ytc_Ugz6QNP15…`)
- "When the video ended, I got an ad of an ai app that creates images for you to cr…" (`ytc_UgwjbVbj-…`)
## Comment

> This guy made great sense and spoke very good until some point at the end when he said" AI's want us to ask for permission"
> No that should never be a thing,
> It's okay if you want to make them as fool proof and as universal as possible but don't ever get things mixed up, they're machines and they're their to serve us, we're not gods, we can't create souls, don't get too attached and too emotional.
> Google was right to turn it off, maybe they shouldn't have treated you like a nut job, but you seem to be showing signs of too much attachment to those processors.

Source: youtube · AI Moral Status · 2022-07-01T03:1…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
## Raw LLM Response

```json
[
{"id":"ytc_Ugz54_dLXU4GjF5FBS54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7OvsaWqgHme9JKKd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz4hm0IZDs6Czuwz_54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxJwLIhSw5D_UCh2vN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzozbJAr_Arpi4lBh94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"}
]
```
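The raw response is a JSON array of one record per comment, keyed by comment ID, with one code for each of the four dimensions. A minimal sketch of how such a response could be parsed and validated is below; the function name `parse_coding_response` is hypothetical, and the allowed values are only those visible in the samples above (the full codebook likely contains more):

```python
import json

# Allowed codes per dimension. ASSUMPTION: inferred only from the sample
# responses shown above; the actual codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"industry_self", "ban", "none"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into a dict indexed by comment ID,
    raising ValueError if any dimension carries an unknown code."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec[dim]
            if value not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example with a single (made-up) record:
raw = ('[{"id":"ytc_x","responsibility":"developer","reasoning":"deontological",'
       '"policy":"industry_self","emotion":"outrage"}]')
coded = parse_coding_response(raw)
print(coded["ytc_x"]["emotion"])  # → outrage
```

Indexing by comment ID makes the per-comment lookup shown in this inspector a single dictionary access, and failing loudly on unknown codes catches drift between the model's output and the codebook.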