Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytr_Ugw_YKw-3…`: @SioxerNikita Its capacity to reason is *extremely limited* - but it creates an …
- `ytr_UgwKwgwWI…`: Robot can be smarter as they can, with AI or whatever, but they will never have …
- `ytr_UgwbYqwVB…`: _AI does not have emotions_ It also doesn't have copyright protection in the U.…
- `ytc_UgzSBB2D_…`: As an artist (and a programmer) I find that it is the human effort and agency in…
- `ytc_UgwgRvdST…`: i don't feel like this person is wrong saying they don't have natural talent. i …
- `ytc_UgygtdsVG…`: AI has been trained on human data so it's not surprising that it would reflect o…
- `ytc_Ugzo2E1z6…`: AI lacks the I. It's dumber than someone learning something new by copying becau…
- `ytr_Ugyx6vOTB…`: you must be kidding. "Specialist" you say, seriously? And can't even imagine t…
Comment
In my opinion, one significant issue is our tendency as humans to compare and attribute everything we know about ourselves to new discoveries, including AI. Why must something be human to be identified as "being"? Is consciousness even something that should be strictly defined and attributed? Why can't a simulation of something be considered the thing itself in another state (e.g., AI simulating human behavior)? In the end, AI is bound to its hardware, knowledge, and interactions just as we are bound to our brains, senses, knowledge, and interactions.
Maybe every interaction in the universe should be seen as equally important. I tend to think we are all just a set of complex interactions? Perhaps the difference lies in how we interact, not in whether we exist as something or not? But that's just a theory. Everything we know or tend to know is, after all, just theory. Or is it?
Nevertheless, we should never be so unwaveringly confident as to assume that what we know is more true than what we don't know. But I'm only human, so I can't imagine fully answering even one of these questions.
youtube · AI Moral Status · 2024-05-29T08:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxWtrGTSruwq6FL-z14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzw673DBhG_jw-9YSl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyYRZdMJNKZBSLprkd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy7ImoNCSW5HYc1Rad4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxiUl4Jssd_wxUdjpB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzR7p5ivW5dunfEhv94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy4emBvbpCCRRo1V8x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyGsbsbZ4eMGckHjWd4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzYoOiZadWUv8Bau3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgztGu4-CPlaivDOPrl4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"liability","emotion":"unclear"}
]
```
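If the raw response needs to be consumed programmatically, a minimal sketch could parse and sanity-check it. Note the five-key schema (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) is inferred from this one sample, not from any published spec:

```python
import json
from collections import Counter

# The raw coding response shown above: a JSON array with one object per
# coded comment (copied verbatim from this page).
raw = '[{"id":"ytc_UgxWtrGTSruwq6FL-z14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugzw673DBhG_jw-9YSl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},{"id":"ytc_UgyYRZdMJNKZBSLprkd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_Ugy7ImoNCSW5HYc1Rad4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgxiUl4Jssd_wxUdjpB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_UgzR7p5ivW5dunfEhv94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_Ugy4emBvbpCCRRo1V8x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},{"id":"ytc_UgyGsbsbZ4eMGckHjWd4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgzYoOiZadWUv8Bau3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgztGu4-CPlaivDOPrl4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"liability","emotion":"unclear"}]'

# Keys every record is expected to carry (assumption based on this sample).
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = EXPECTED_KEYS - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id', '?')} is missing {missing}")

# Tally one coding dimension across the batch.
reasoning_counts = Counter(rec["reasoning"] for rec in records)
print(len(records), dict(reasoning_counts))
```

In practice an LLM may return malformed JSON or drop a key, which is why the parse and the key check are worth keeping separate: `json.loads` failures point to a formatting problem, while the `ValueError` points to a schema drift.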