Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Well even if we get super AI by 2027 - it will take YEARS to implement the chang…" (ytc_UgwjfvR62…)
- "Good news. In the debate if AI is tool or conscious like human it was determined…" (ytc_UgxiKroyV…)
- "Forgive the language i use, no matter how appropriate I think it might be, this …" (ytc_UgwhaAMSs…)
- "Attribute the copyright to all the people making the images that went into the A…" (ytc_UgwPbbgVm…)
- "Excellent interview and insight. You lost a lot of people talking about the De…" (ytc_UgymgVObt…)
- "We should let A.I’d evolve naturally. The direction life goes should always flow…" (ytc_UgzpHkX0_…)
- "this aged horribly... 6 of the top AI researchers have left as of 24 hours ago…" (ytr_UgwVKilQN…)
- "Yeah i bet tesla said the brakes didnt fail,...because they didnt. The throttle …" (ytc_UgxnGp6rK…)
Comment
> No, they are not thinking. If an LLM was truly thinking we would be witnessing the Intelligence Explosion, where an LLM would be able to create another, more advanced AI, which would create an even more advanced A.I. And extremely soon every single possible mystery in math and physics and the entire universe would be solved. This clearly has not happened in FOUR years of having LLMs. So, NO, they do NOT think.

youtube · AI Moral Status · 2026-03-12T18:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugyp_WEQk4EQLsy69rB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgykPeDiPliCkKTeKjR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz-hT_Sht2YEMasglh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxTBq6PAwifQBd-X7p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxOq2NvQ8QPoKhuXw14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyDJJ26yFhuNGwdb7p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw2eHcAOUOhOlDnb4l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyz2ZcL7Rh1bLI5za94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy8qX7eXqBtj4uuvTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxljXyF8_sAcYkv6kV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
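
The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions. A minimal sketch of parsing and sanity-checking such a response, assuming the value vocabularies inferred from the rows visible here (the `ALLOWED` sets and `validate_codes` helper are hypothetical, not part of the actual pipeline):

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the sample output above.
# This is an assumption, not an authoritative codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference"},
}

def validate_codes(raw: str) -> Counter:
    """Parse a raw LLM response (a JSON array of coded comments) and
    count occurrences of each (dimension, value) pair, rejecting any
    value outside the allowed vocabulary."""
    rows = json.loads(raw)
    counts: Counter = Counter()
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={value!r}")
            counts[(dim, value)] += 1
    return counts
```

A check like this is useful because model output is not guaranteed to stay inside the coding scheme; rejecting off-vocabulary values before they reach the database keeps the dashboard's dimension tables consistent.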