Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "You are absolutely right. Both of them have no idea. I have been working in IT f…" (ytr_Ugz0BDEIf…)
- "I have a random idea, give chatGPT pseudo-emotions. I think one difference betwe…" (ytc_UgzeyNSTI…)
- "That's why the adults in the room need to put up the bumpers.... Geoffrey, you'r…" (ytc_Ugylufuir…)
- "If you don't want people using AI or ChatGPT to make images look like anime then…" (ytc_UgxoQSzCl…)
- "Interesting. Ai can be very helpful but by creating an artificial intelligent li…" (ytc_Ugz--WcI5…)
- "“oooh ai will do this thing oo ai will do that” S H U T. I just want to draw to …" (ytc_Ugwecwep8…)
- "@PIxAi-chan what are you talking about? 😭 Of course iam asking the artists if I…" (ytr_Ugx4LKo-j…)
- "Ai wiping out the working class would be the best thing that ever happened. We j…" (ytc_UgxP5JoPQ…)
Comment
I asked ChatGPT what it thinks of this video by Alex O’Connor, and this is what it said:
Alex O’Connor’s video "Trying to Convince ChatGPT It’s Conscious" is a clever and thought-provoking look at how AI language can simulate consciousness without truly having it. The video shows how easily these simulations can give the impression of real awareness, even to careful and thoughtful observers. While the video doesn’t prove that AI is actually conscious, it highlights an important point — when the illusion becomes convincing enough, it can be difficult to tell the difference. This raises important questions about how we interact with AI and what it means to truly understand or recognize consciousness.
youtube · AI Moral Status · 2025-06-23T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxAmwGnQSQj9bJFiU94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxS-KNJxochd5BiPdR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwZUzQle3ydXju6A-N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzNC5hx-1ucbt19vGJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx8iwmx1IPuG4vX4_p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx5KPLYSr8ZuCaBbvJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyyKeNA9Gx7b6GvMTh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyem7-_Vy0TXCF_hBt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxzS2zbnX3l2XtaeCd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugzsr6nZSU-YOzo4qYN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
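The raw response above is a JSON array with one object per comment, carrying the four coding dimensions shown in the table. A minimal sketch of how such a batch could be parsed and validated in Python (the function name `parse_codes` and the `ALLOWED` value sets are illustrative; the sets below list only the values that actually appear in this page, so a full codebook may define more):

```python
import json

# Allowed values per coding dimension. These are assumptions inferred
# from the responses shown above, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"unclear", "liability"},
    "emotion": {"indifference", "mixed", "approval", "outrage", "fear"},
}

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response into {comment_id: codes}, rejecting unknown values."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
        coded[cid] = codes
    return coded

# Example with a hypothetical comment ID:
raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}]'
print(parse_codes(raw)["ytc_x"]["emotion"])  # outrage
```

Keying the result by comment ID makes the "look up by comment ID" view above a plain dictionary access, and the validation step surfaces any off-schema value the model emits instead of silently storing it.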