Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Don't fear what you don't understand. The responses can vary dramatically depend…" (ytc_Ugy8bAoXc…)
- "Well we actually do have some A.I. that has, technically, passes the Turing Test…" (ytc_Ugjmo31Jk…)
- "You are on the right side of this issue, Sam. I also hope that you will not be…" (ytc_UgxpUhS1p…)
- "I think the lack of questioning whether these AI solutions truly create a better…" (ytc_UgzV4SHn_…)
- "AI is a natural language search engine. That's it. I have a simple rule that I…" (ytc_Ugwo7dIfU…)
- "hahaha, he will not take Elon`s take on AI serious because he is not an AI exper…" (ytc_UgwCEOW5c…)
- "What if AI becomes so intelligent it just decides to not work and become unemplo…" (ytc_Ugz6lfrM4…)
- "You are wrong. And there's plenty of videos here on youtube, showcasing how peop…" (ytr_UgzP8fO7-…)
Comment
This is utter nonsense. No one should care about consciousness.
AI should be a tool that serves a purpose. LLMs / foundation models succeed in that to a large extent.
The only relevant question is whether the tech we call AI, does serve that purpose. Any philosophical mumbo-jumbo about consciousness makes no difference in the real world and doesn’t change its usefulness.
It would be even desirable for AI to not be conscious, since it won't be able to suffer while serving humanity.
Source: youtube — AI Moral Status — 2025-04-22T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_Ugy0eEbTZrFd19pjEg94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxhVwQl4ejcLQZoxZx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyXdM7Fc-6yELJ4D8h4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwKbyLaRfwD2i0cJqp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKaargc3XDmnK1B_54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzAHwFExuxVyJ6cUKl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzhdkBMF6LiVCJKBgZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxrheKRgQHXL43DWTN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxF5Lm4hQRRra1kURB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy3lkG5aidJJS-allN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
```
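Since the model returns one JSON object per comment, looking up a comment's coding is just a matter of parsing the array and indexing by `id`. A minimal sketch (not part of the tool itself; the two entries are copied verbatim from the raw response above):

```python
import json

# Two entries copied from the raw LLM response above; a real run would
# use the full array returned by the model.
raw = '''[
  {"id":"ytc_Ugy0eEbTZrFd19pjEg94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxhVwQl4ejcLQZoxZx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]'''

# The four coded dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# Index each row's dimension values by its comment ID.
codes = {row["id"]: {d: row[d] for d in DIMENSIONS} for row in json.loads(raw)}

# Look up one comment's coding, as the result table above displays it.
print(codes["ytc_Ugy0eEbTZrFd19pjEg94AaABAg"]["reasoning"])  # consequentialist
```

Indexing by ID also makes it easy to spot comments the model skipped or coded twice: compare the keys of `codes` against the IDs that were sent in the batch.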