# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Random samples
- "90% of Anthropic's (the creator of Claude) revenue comes from other companies. S…" (ytr_UgwcMKwjp…)
- "Artists already get so little appreciation as is, and now with this AI issue? It…" (ytc_UgwEehaEh…)
- "I don't think a human would have avoided this accident, but I think a good auton…" (ytc_UgwyO7RFw…)
- "Wow aren't you condescending and you wonder why people want AI to replace you lo…" (ytc_Ugx7tsv64…)
- "The real controversy is that why it makes humans have weird fingers? Is that how…" (ytc_UgzWLhjg0…)
- "Saw the first 10 secs and went to try this out. Same question and I got this inf…" (ytc_UgwtxCAWo…)
- "Ai will definitely lean towards whipping us out BUT i think the real question is…" (ytc_UgxUXjYxd…)
- "I would say yes, it does hurt artists since those ai characters do come from pie…" (ytr_UgwpJk_Gu…)
## Comment
One key issue that gets missed in a lot of these discussions is an examination of the assumptions underlying the predictions.
The thing that all super intelligence doom predictions have in common is an assumption that intelligence can keep scaling up far beyond what humans have today. Sure, computers can scale up processing speed far beyond the speed of the human brain, but intelligence requires knowledge and interfacing with the external world, and those things are much more limited. It may very well be that a super intelligent AI is just one that matches the thinking of the best humans, but can come to the same conclusion in a millisecond instead of a second. Technically that's 1000x "better", but not really in any meaningful doom scenario way.
youtube · AI Moral Status · 2025-11-08T22:3… · ♥ 3
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
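The coding result shown above is just one coded record laid out as a table. As a minimal sketch of that rendering step (the field names `responsibility`, `reasoning`, `policy`, and `emotion` follow the raw JSON response; `coded_at` is a hypothetical key for the "Coded at" timestamp):

```python
# Render one coded record as the markdown "Coding Result" table.
# Field names follow the raw JSON response; "coded_at" is a
# hypothetical key assumed for the timestamp row.
def render_coding_result(record):
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", record.get("coded_at", "unknown")),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

print(render_coding_result({
    "responsibility": "none",
    "reasoning": "deontological",
    "policy": "none",
    "emotion": "indifference",
    "coded_at": "2026-04-26T23:09:12.988011",
}))
```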
## Raw LLM Response
```json
[
{"id":"ytc_UgyoLMOtFEl0BjclbeR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzR9dGuib9D_x6R59p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugympg2UI7opOBtLvgJ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugzuuhgjo8G0oiToXph4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxMrfvsAJmY7PPPcwB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyfCmtRz7UT1gbgipF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLVZvZD0hg7hoMuk14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxMQFPc9jODJB5LPnp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyzVw3UdEoIdpKs29x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzlOENpGXgOKmboT9F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
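Since the model returns a bare JSON array, downstream code has to parse it and check every record against the coding scheme before storing it. A minimal sketch, assuming the scheme allows exactly the values observed in the responses above (the real codebook may define more):

```python
import json

# Allowed values per dimension -- assumption: only the values
# observed in the sample responses above; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "ban", "industry_self"},
    "emotion": {"outrage", "fear", "mixed", "resignation", "indifference"},
}

def validate_batch(raw):
    """Parse a raw LLM response and return (records, errors).

    Each error is a tuple (record_index, field, offending_value)."""
    records = json.loads(raw)
    errors = []
    for i, rec in enumerate(records):
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                errors.append((i, field, rec.get(field)))
    return records, errors

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"indifference"}]')
records, errors = validate_batch(raw)
print(len(records), len(errors))  # 1 0
```

Records with a non-empty error list can then be flagged for manual review or re-coding rather than silently written to the results table.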