Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "That is fucking haunting. I don't doubt that these are merely the opening salvos…" (ytc_UgwhuSzku…)
- "I don’t know. Cold war, oil crisis, Millennium Bug, 2012, now AI taking over th…" (ytc_UgzGVWbRC…)
- "How about the guy that made a one piece ai image and then was absolutely demolis…" (ytc_UgzalfDE2…)
- "A robot with AI intelligence at the same or higher level than human intellect wo…" (ytc_UgwYFFF8L…)
- "I work in data entry, I am currently working with an AI company (so they can lea…" (ytc_UgzbOg1Rz…)
- "If you're already working for free you don't need to be afraid of AI taking your…" (ytc_UgwWpdO5B…)
- "We appreciate your thoughts on the potential risks associated with AI. It's cruc…" (ytr_Ugy-YlgxJ…)
- "@AIUser-t7z I don't know how you can describe them as primitive. We don't actual…" (ytr_Ugw3uziqs…)
Comment
Your comment at the very end about "a difference in-kind" between the current algorithms and "intelligence" is fully valid in my opinion, but I don't think it's really what Nate's getting at; his talk of superintelligence would, I believe, involve a different type of algorithm. For example, these models were "trained" by feeding them certain types of data, but that is still very different from how traditional intelligences learn. In particular, they don't know "logic", which is a crucial variable here. But could an AI be made whose "units" were not individual word pieces but instead logical concepts? Maybe.

Source: youtube | Video: AI Moral Status | Published: 2025-11-09T08:2…
Coding Result
| Field | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
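Judging from the values visible on this page, each record draws on a small fixed category set per dimension. Below is a minimal validation sketch, assuming those sets are complete; they are inferred only from the examples shown here, and the `CODES` constant and `validate_coding` helper are hypothetical names, not part of the tool.

```python
# Sketch: validate one coded record against the category sets
# visible in this section. CODES and validate_coding are hypothetical.
CODES = {
    "responsibility": {"none", "government", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is well-formed."""
    problems = []
    # Comment IDs on this page start with ytc_ (top-level) or ytr_ (reply).
    if not record.get("id", "").startswith(("ytc_", "ytr_")):
        problems.append(f"unexpected id: {record.get('id')!r}")
    for dim, allowed in CODES.items():
        if record.get(dim) not in allowed:
            problems.append(f"{dim}={record.get(dim)!r} not in {sorted(allowed)}")
    return problems
```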
Raw LLM Response
[
{"id":"ytc_UgzCfgXOWqj_QckvzY14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1PcgxyRpO6yFePBd4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgysOsgfV69frC13hlN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzwr_KSzvipseA0Au94AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyl9pZdZa4uSa23sUZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwB_5LjgvmB9LLCc3d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy03wl9LdwnUgQDn0l4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzURJ6yX_tzv56jRcV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyySM7JDt6YFvJjZd54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz8jelfArGzHzPt87F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
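Because the raw response is a flat JSON array keyed by comment ID, both views on this page (lookup by ID and random samples) amount to indexing that array. A minimal sketch, assuming the response has been saved to a hypothetical file named raw_response.json:

```python
import json
import random

# Build an ID -> record index from a raw response like the one above.
# The file name raw_response.json is an assumption for illustration.
with open("raw_response.json") as f:
    codings = {row["id"]: row for row in json.load(f)}

# Look up one comment's codes by its ID.
record = codings.get("ytc_UgzCfgXOWqj_QckvzY14AaABAg")
if record is not None:
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dim}: {record[dim]}")

# Draw a handful of random samples to inspect, as the page above does.
for comment_id in random.sample(sorted(codings), k=min(8, len(codings))):
    print(comment_id, codings[comment_id]["emotion"])
```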