Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- One thing you didn't mention is the electricity costs we are going to be feeling… (ytc_Ugx5fr1Tw…)
- I wish there was a neon blaster and j like one of those robots that make robot… (ytc_UgxcxcNHI…)
- The number of people who think AI Therapy is a good idea is horrific. I understa… (ytc_UgxwdiwcZ…)
- To quote a song, "AI is the worst waste of potential since the ninja turtles". Y… (ytc_UgzUqNBq1…)
- Clip one is definitely ai and clip two is definitely real like who can turn into… (ytc_UgyY8uHER…)
- Terrorist have infiltrated our country, they must be found. If AI can find them … (ytc_Ugw4KWPTO…)
- It's like / For example that robot girl you showed / Wow / The concept of a humanoid … (ytc_UgzaDnSLn…)
- You have to be a vicariuos to chatgpt. Then give room for making a conversation … (ytc_Ugxibeduw…)
Comment

> Disappointing cannot begin to describe this interview let alone this video. I'm glad to have listened to this brilliant man, because the thumbnail and troglodyte title almost pushed me away. To clarify for those who want to watch, and also to save you from this horrendous interviewer, they speak about a very common theory in computer science and machine learning, Gödel's theory, and explain the basic principle of why at this moment AI consciousness isn't feasible. It not that it is impossible, as the video dumbly puts it, it is just not within our capabilities as of yet because computers are incapable of learning and understanding information. They are merely able to store it, correlate it, and reference it. I will say I am shocked I haven't heard of this computer scientist before, he seems very approachable with his understanding.

youtube · AI Moral Status · 2025-07-30T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgysCaw2IlAL8GU5iT94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxkaTkbIhgimwwFXHl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzT6fElH3xvnRyUGxF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyrPkMWU5cMqAqQe8N4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxAJEhKtwHzWI4HpvF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwOTJc772mlExwRbvh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxhG8lqQ-EGq3-x8wV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw8s4M98z7e2dcdwe54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwZY6cb-y_jb-EpAo54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwBQp3XCuITb3c9X194AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
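The raw model output is a JSON array of per-comment coding records, so "look up by comment ID" amounts to parsing the array and indexing it by the `id` field. A minimal sketch, assuming the field names shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); `RAW` here is a stand-in for the stored response string, abbreviated to two of the records from the listing:

```python
import json

# Stand-in for the stored raw LLM response (excerpt of the array above).
RAW = """
[
  {"id": "ytc_UgxhG8lqQ-EGq3-x8wV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw8s4M98z7e2dcdwe54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a raw coding response and index each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(RAW)
print(codes["ytc_UgxhG8lqQ-EGq3-x8wV4AaABAg"]["emotion"])  # → outrage
```

In practice the same index makes it easy to join the coding dimensions back onto the original comment text for display, as the tool does for the comment shown above.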