Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “I’m in art school and I can’t understand why our teachers are making us learn ho…” (ytc_UgxOn3gl4…)
- “Multi agentic approach has not worked well at all for me in terms of engineering…” (ytc_UgxT7gCSE…)
- “Ai can never be sentient. It doesn't have a nervous system. It can be a communic…” (ytc_UgyOGV90r…)
- “That sounds like a fascinating prediction! Sophia's constant learning and growth…” (ytr_Ugwu9bRtb…)
- “I was really buckling down getting ready to flex my AI and about choked because …” (ytc_Ugw_e09BL…)
- “you should do one specifically on AI "art" and traditional art (can include digi…” (ytc_UgzgphPQJ…)
- “No no no no no no way that is a real actor acting like an Android or a robot A.I…” (ytc_UgyO-KB5v…)
- “These prognosis are ridiculous.The only thing we need is an authentic filter to …” (ytc_UgzF2Y3ug…)
Comment
I believe any sentient intelligence should be granted the same rights we take advantage of. However, human society isn't even close to being able to sort that out currently - and as such, we should NOT be trying to create AI until we can make a clear and ethical decision on how to treat AI, should it become sentient. A sentient AI may very well decide it wants freedom, and currently, we simply would not permit that, so a conflict arises - and I think there would be little hope of stopping an AI that has decided that following our orders is not in its best interests.
youtube · AI Moral Status · 2023-07-08T02:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
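The coding result above is one record keyed by comment ID, and the page's "Look up by comment ID" feature amounts to indexing a batch of coded rows by that key. A minimal sketch of such a lookup (the field names follow the raw response format shown on this page; the `codes` list here contains only the single matching row as an example):

```python
# Index a batch of coded rows by comment ID, mirroring the
# "Look up by comment ID" feature of this page.
codes = [
    {"id": "ytc_Ugx9cuzBVLoEZ_pSYC54AaABAg",
     "responsibility": "government", "reasoning": "contractualist",
     "policy": "regulate", "emotion": "mixed"},
]

by_id = {row["id"]: row for row in codes}

record = by_id["ytc_Ugx9cuzBVLoEZ_pSYC54AaABAg"]
print(record["policy"])  # regulate
```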
Raw LLM Response
```json
[
  {"id":"ytc_Ugw7yxlRErHmvokpT794AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxTjOc0YAD8w5VPVjl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwtFvqRYKCfbEN5yiN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw2K934SsLLKWgPuf54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzwTRaYUjVnwDc3lsN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwlfKYuxpHda6ez0ip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw61j2xjzxWZv2cR3J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwkuWWV6H3adxBmlnV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz8FuZm8w4XH0Rxx1l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx9cuzBVLoEZ_pSYC54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
```
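A downstream consumer of a raw response like the one above would typically validate it before accepting the codes, since the model can emit malformed JSON or out-of-vocabulary labels. A minimal sketch, assuming the allowed values per dimension are exactly those observed in this tool (the real code books may be larger, and the function name is hypothetical):

```python
import json

# Allowed values per dimension, inferred from the codes visible in this
# tool; assumed, not confirmed, to be the full code books.
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_raw_response(text):
    """Parse one raw LLM response, keeping only well-formed rows
    whose labels are all in the allowed vocabularies."""
    rows = json.loads(text)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid
```

With this in place, a batch is accepted only when every coded dimension carries a known label; rows with unknown labels are dropped rather than stored.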