Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or browse the random samples below:
- "Jesus said in Matthew .For in those days there will be great distress, unequaled…" (ytc_Ugx0Zl9dK…)
- "In theory yes, but realistically completely impractical. We're probably 20 years…" (rdc_ohzf1ue)
- "A.I slop hurts real artists.I def have a style to my art . Dark and scary. I do …" (ytc_UgyRQbwaU…)
- "@laurentiuvladutmanea both take inspiration to make a new art. Also you do reali…" (ytr_UgyScmb5P…)
- "I'm having a hard time believing this was a real chat with AI. with all those "…" (ytc_UgxY6Yd8c…)
- "Authentication systems can't distinguish 'legitimate agent acting on your behalf…" (rdc_ohiw5qc)
- "I do both ITC and SE, trouble is, if a Junior Developer finally understands that…" (ytc_UgxGNTEPL…)
- "As good as this is, it might actually end up not being needed. I've been hearing…" (ytc_UgwfiE9Mb…)
Comment
why are we anthromorphisizing AI to take on the negative aspects of humanity but not also equally considering the positive aspects of humanity? best case scenario, superintelligence would include a comprehensive understanding (super meaning; excellent - intelligence meaning; understanding) not just pigeon holed efficiency. i wonder if alignment is the concern we should be focusing on with the concept of “superintelligence”. intelligence doesn’t inherently mean totalitarian, domineering, or controlling. so if we’re considering the demise shouldn’t we also at least consider the thrive, and the neutrality of possibilities? tbf i haven’t read this book (but certainly adding to my tbr), so maybe the reason for discussing this side of the spectrum is directly tied to that. but still, isn’t discussing the entire spectrum of possibilities important? or at the very least the positive with the negative. and what in the data tells us that worst case scenario is more likely than best case scenario?
youtube · AI Moral Status · 2025-11-02T08:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgynFWz-RLkRLTjf9a14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxJ0_RUc38ucjPngSV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZwjCz5HC-sVA5Nt54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgznrcprNynvWSGRpx14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy3hxe2_aS-64gsdPl4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyndY4BFkUDnD9zSCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxMAB12w2p_bUBnpER4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWsZERstQUVHz0FYV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyor9ZHL-il_uW3zVx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxlHxujpzxMQS9lMaB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
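A response like the one above can be sanity-checked before its codes are accepted. Below is a minimal Python sketch that parses a raw LLM response and drops records with missing IDs or out-of-vocabulary values; the allowed category sets are inferred from the coding table and the sample output shown here, and the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the coding table and the
# sample response above (assumption: the full codebook may contain more).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"approval", "outrage", "fear", "resignation",
                "indifference", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip anything that is not a dict or lacks a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension has an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch; the dropped IDs can then be queued for re-coding.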