Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below to inspect it.
- so why doesn’t the government regulate AI… instead of let these companies develo… (ytc_UgwtY6B5c…)
- Ai is getting too advanced. I can’t believe it can now create mini games now… (ytc_Ugy5UP6MG…)
- I really don't get why so many people hate AI art. I just think it's because it'… (ytc_Ugw1VciW2…)
- The difference between AI and a Van Gogh painting is you can only buy one of the… (ytc_Ugx0EL7jw…)
- I mean this is the nicest possible way: I'm glad AI art "took" that "job" from h… (ytc_UgwVNkIBS…)
- @nimbusloud I didn't really have a point. I was mostly commenting to give enga… (ytr_UgxAlDIl6…)
- 49:18 “Why can’t we just make an AI that doesn’t want things? Let’s just make on… (ytc_UgzjAVsuC…)
- i rarely use ai and i only use it to answer more specific questions and to lear… (ytc_UgwjyGO-2…)
Comment
People hype AI, but they don't realize the ridiculously difficult hurdles and fundamental challenges involved in creating true, conscious AI. The "AI" we have now are just nonthinking computers scripted to give the impression that they can hold conversations with us. But they can't actually think, they don't understand the words they're speaking or hearing, and they cannot learn in any true sense. They're smoke and mirrors. A nice trick, but still a trick.
We have a long way to go in understanding the human brain, and even animal brains, before we can even think about trying to create real artificial intelligence.
youtube · AI Moral Status · 2017-02-24T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
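For clarity, the coding result above can be thought of as one typed record per comment. Below is a minimal sketch of that record in Python, using only the dimension names and the category values that appear in this sample; the actual codebook may define additional categories, and the class name is illustrative, not part of any existing tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions shown in the table above."""
    comment_id: str
    responsibility: str          # values seen here: "none", "developer", "user", "ai_itself"
    reasoning: str               # values seen here: "deontological", "consequentialist"
    policy: str                  # values seen here: "none", "unclear", "liability"
    emotion: str                 # values seen here: "indifference", "approval", "outrage", "fear", "resignation"
    coded_at: Optional[str] = None  # ISO 8601 timestamp of when the code was assigned, if recorded
```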
Raw LLM Response
[
{"id":"ytc_UgjkJ5oGO9Wrg3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugixgzq73KpX43gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh5JFZ79nf9MXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgicBH5REIL6ZngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgisEJ6s7i1KOXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UggmBsI9cRijcXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UghdMxvyt73s-XgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgjF9I1mY-z9s3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgiL4ECa6MeGC3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh3qhnb7IodFHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
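The look-up-by-comment-ID view can be reproduced directly from a raw response like the one above. A minimal sketch, assuming the raw LLM output is a JSON array of records keyed by "id"; the function name is hypothetical and chosen here only for illustration.

```python
import json
from typing import Optional

def find_coded_comment(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM batch response (a JSON array of coded records)
    and return the record whose "id" matches comment_id, if any."""
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        return None  # the model output was not valid JSON
    if not isinstance(records, list):
        return None  # expected a batch (array) of records
    for record in records:
        if isinstance(record, dict) and record.get("id") == comment_id:
            return record
    return None

# Example: look up one of the IDs from the raw response shown above.
raw = '[{"id":"ytc_Ugh3qhnb7IodFHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}]'
print(find_coded_comment(raw, "ytc_Ugh3qhnb7IodFHgCoAEC"))
```

The parse step is wrapped in a try/except because raw model output is not guaranteed to be well-formed JSON; returning None lets the caller distinguish "comment not found" and "response unparseable" from a successful lookup.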