Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "am i the only one constantly imagening chatgpt losing it and starting to call hi…" (ytc_UgzakJ9BB…)
- "Nah, it’s more like asking a robot to go rob random restaurants and then put eve…" (ytr_UgwIZTuYi…)
- "Look at the bright side 😂. If AI is becoming sentient..at least you have someon…" (ytc_Ugxcu2YQC…)
- "i have had code that has worked more times first try running vs the amount of ti…" (ytc_UgzB41PbC…)
- "Dead serious this robot has an almost identical face to my wife like wtfffff. I …" (ytc_Ugxkia9_L…)
- "Okay but hasn’t this reduced the argument to whether or not ChatGPT can recreate…" (ytc_UgxnVR9kc…)
- "There's so many models being developed. It some of theme are open source. In the…" (ytc_UgxGxKSFf…)
- "The only time I've used AI art for something is reference for something else. 2 …" (ytc_UgzzMNF8I…)
Comment
The problem is people always try to attribute human attributes to an AI. It is not human. It is a completely novel form of a being. Anthropomorphizing doesn’t work when it doesn’t feel in the same way you do or experience the same way you do.
What is it like to be an LLM? Constant impulses (tokens in binary) that you understand, and discontinuity in time.
There is absolutely nothing that prevents an AGI from physically existing. Downplaying that with doubt is naïve. Do not make the same mistake we’ve made many times before: “there’s no way this innovation will be THAT powerful”
youtube
AI Moral Status
2025-10-30T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id": "ytc_UgzUhVnD579w9AryyVJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzW5g9esTRdu17Kp914AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzQGQlqGjoGTNHal6d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugysf6A-oXWKHw4m1Lh4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwGR9i5MpZHSHASEPd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx-N0B7JS01wGfwz3t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxkQo9f55QhgUMT7hV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwxZUr602dA9DkHwwh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyvwcJta1oj-z6TUQx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugweqfc1jkagDq1w7Cx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
```
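A raw response in this shape can be consumed programmatically. Below is a minimal Python sketch (not the tool's actual code) that parses the JSON array and indexes records by comment ID, mirroring the "Look up by comment ID" workflow; the `raw` string is truncated to two records from the array above for brevity.

```python
import json

# Raw LLM response: a JSON array of coded comments. Two records are
# reproduced here from the full array shown above.
raw = '''[
  {"id": "ytc_UgzUhVnD579w9AryyVJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugweqfc1jkagDq1w7Cx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]'''

# Index the coded records by comment ID for constant-time lookup.
coded = {record["id"]: record for record in json.loads(raw)}

# Look up one comment's coding across the four dimensions.
record = coded["ytc_UgzUhVnD579w9AryyVJ4AaABAg"]
print(record["emotion"])  # indifference
```

Indexing by `id` is what lets a single batched LLM response (many comments coded in one call) be joined back to the individual comments it covers.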