Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- Nice video to add to my ‘autonomous driving sceptics’ playlist. More and more ar… (ytc_UgyhbxjFZ…)
- Right?! This is a compression/recall tech, not a thinking machine. If no one pro… (ytr_Ugzox1vnN…)
- I think he is missing the forest for the trees. There are barriers everyone has … (ytc_UgyZ7_FFt…)
- "WHO IN THE HELL THOUGHT THIS ROBOT VS HUMAN MATCH WOULD BE A GOOD IDEA?"… (ytc_UgyNMVAl2…)
- Are there no very sophisticated algorithms than do the same already? Why and how… (ytc_UgxGiWvVK…)
- I saw one recently that looked like Jimmy Kimmel, sounded like him, but spelled … (ytc_Ugzvyt0fL…)
- I can't agree with the opinion of Mr Hinton on the interviewing skills of the ch… (ytc_UgyJEBcaN…)
- Ive worked in game design, I can tell you first hand they are embracing AI.… (ytc_Ugx7T-kQz…)
Comment
This is what AI will end up doing- it will deprive humanity of it's autonomy, maturity and history. It will undo our capacities to think. It will cause us to no longer fear the banning of books because no one will want to read them. Humanity will be reduced to passivity and pure egoism. The truth will be drowned in a sea of irrelevance. And AI will reflect and take into account our almost infinite appetite for distractions. We would then become a captive culture that allows oppression to be externally imposed and whatever truth is left will be concealed from us.
Source: youtube · 2025-03-11T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgylDZyOvJ5kOEc7LV14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwdluUMlHOB1KpPNcR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz--iKhqp8d4nW_EwF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx3Ftmqk3prZhLyaRB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyJfVL_EYgPXaLajxd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwKjKfBn3v-i18cJ954AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwpZcd77O-IR65QWw94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxlGhBEywGw8YZS36B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxyMw-At-d09lIX6ld4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxsPXIFAC1x_byOiIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
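As a sketch of how such a raw response might be consumed downstream, the JSON array above can be parsed into per-comment records and each dimension validated against an allowed set. The category sets below are inferred only from the values visible on this page; the project's full codebook may define more.

```python
import json

# Allowed values per coding dimension (assumed from the "Coding Result"
# table and the raw responses shown above; not the full codebook).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records)
    into a dict keyed by comment ID, validating each dimension."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {rec.get(dim)!r} for {dim}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the response above, for illustration.
raw = ('[{"id":"ytc_UgxsPXIFAC1x_byOiIF4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"ban","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgxsPXIFAC1x_byOiIF4AaABAg"]["policy"])  # ban
```

Validating at parse time surfaces any off-codebook label the model invents as an immediate error rather than a silent miscode.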