Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "And the creator of the deepfake website confirmed that the content Atrioc access…" (ytc_UgyV5jw1J…)
- "My only gripe ive ever had with AI art is that people that generate the images a…" (ytc_UgyjSrOix…)
- "In a perfect world, AI wouldn’t reduce the number of jobs as much as it would ju…" (ytc_UgyhPETlA…)
- "yea but we can fully control the amount of AI robots because we would have to cr…" (ytc_Ugi7kG8Ji…)
- "The jobs they listed are NOT the ones that are easily automated… chefs, waiters,…" (ytc_UgxiJtlgx…)
- "[ IN 1990s ] Artist: NOOOO!!! THE DIGITAL ART WILL DOOM US !!!!!! [ IN 2022 ]…" (ytc_UgyUNYFOj…)
- "As a professional illustrator for over 40 years, I personally enjoy drawing f'd …" (ytc_UgyBq9Eei…)
- "For me, right now is a matter of power, as Elon musk said, if he doesn't do it s…" (ytc_Ugy9dYXjt…)
Comment
Hinton is either grabbing for attention or he's just old. AI will not become conscious, because no one is even attempting to do that. Your chat bot is two neural networks combined: one does next word prediction, the other generates human-sounding speech. Neither model "knows" anything, you are just talking to statistical word predictions - the rest is entirely in your mind. Actually learn how LLMs and AI works, and you'll realize everything can be explained as a form of lossy compression, nothing more. Hallucinations are not a bug - they are the only feature, the only difference is whether you like or agree with what the model predicts. Stop fantasizing about consciousness in AI, please. You don't know what you're talking about.
youtube · AI Moral Status · 2025-06-08T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwMCVb0jmxiKKDBGFB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyo4VuSFxK-G5m5emx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyCy7CUEsc72g07kid4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwWWo8Ofrq62S4JJK54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw1X9xTxowLrEd_2NR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwlVQSTgN8EiVuZdNd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzRSD6P-SwzeqOpYht4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwQzJu4eVuT9Iwo6LR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-hQjKsXsX3TpYzjp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwXTrgWo7_o8ZqE8F94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
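The raw LLM response above is a JSON array of per-comment coding objects. A minimal sketch of the "look up by comment ID" step, assuming this exact schema (the two sample rows below are copied from the array above; any other field names would be assumptions):

```python
import json

# Raw LLM response: a JSON array of per-comment coding objects,
# schema taken from the sample response above.
raw_response = """
[
  {"id": "ytc_UgwMCVb0jmxiKKDBGFB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwWWo8Ofrq62S4JJK54AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

codings = json.loads(raw_response)

# Index the rows by comment ID for constant-time lookup.
by_id = {row["id"]: row for row in codings}

row = by_id["ytc_UgwWWo8Ofrq62S4JJK54AaABAg"]
print(row["reasoning"], row["emotion"])  # deontological outrage
```

In a real pipeline the same index would let the dashboard resolve a truncated sample entry to its full coding record by ID.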