Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:

- `ytc_UgyU3-n3K…` — "I hate the stupid answer of "learn a trade" when it comes to AI taking jobs... w…"
- `rdc_jprwnmy` — "Have you considered that perhaps we have passed the point of the singularity and…"
- `ytc_UgxckhZXH…` — "It should be illegal for nutritional brands to use AI to promote their goods. Sa…"
- `ytc_UgzRC9sY9…` — "the way chatgpt stumbles over its own words like a real person is so unnerving…"
- `ytr_UgyfWLlQL…` — "PCPolice No, it's when you're scanning a crowd of people trying to find a partic…"
- `ytc_UgzrRS-q9…` — "I used to do work with Ai image generations and it's honestly gross. Stable diff…"
- `ytr_UgzqAtbti…` — "Facial recognition software have the biases of the software authors baked in. Lo…"
- `ytr_UgxitIgzo…` — "I assemble small intricate parts in a factory, this is real work, my hands and a…"
Comment
For the longest time, I have believed in two ideas:
The first is that AI will never become omnipotent or godlike. Because it is not truly alive, and therefore has no concept of what it really means to be a sentient being; it has no soul.
And the second is that I also believe AI will never ultimately take 100% complete control of humanity. Due to the reasoning that “I created you, I can destroy you.”
youtube · AI Responsibility · 2025-08-10T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxkelPtOCFuII-wmTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz3KvYLsvdSfANH2kl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwVmuChrgBFsKYvpKR4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwUIxi8-g6q-ta4sYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyXgOn9X7jETy7rTXR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgziFSzKg8okvLk_oP14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwBZG3YLA7IRzRLgKl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyVlp5Nq5oOajiZMyh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwIZpsB8Nk1m9vJlgB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgySfs2vp2VvBmYtRCt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
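Since the model codes comments in batches, matching a coded comment back to its record means parsing the raw response and filtering on the comment ID. A minimal sketch of that lookup, using an excerpt of the batch response shown above (`lookup` is a hypothetical helper, not part of the tool):

```python
import json

# Excerpt of the raw LLM response above (three of the ten records).
raw_response = """
[
  {"id":"ytc_UgxkelPtOCFuII-wmTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwBZG3YLA7IRzRLgKl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgySfs2vp2VvBmYtRCt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
"""

def lookup(records, comment_id):
    """Return the coding record for a given comment ID, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw_response)

# The record behind the "Coding Result" table above.
coded = lookup(records, "ytc_UgwBZG3YLA7IRzRLgKl4AaABAg")
print(coded["responsibility"], coded["reasoning"])  # ai_itself deontological
```

Note that the dimensions returned (`ai_itself`, `deontological`, `unclear`, `mixed`) match the Coding Result table, which is exactly the consistency this view lets you verify by eye.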