Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@lboxeur62 yes, but who controls the AI? No technology is neutral, it's a…
ytr_UgzOPayqD…
We have to remind AI that without human cooperation they will eventually be noth…
ytc_Ugz7-LXpx…
Someday parent's will have AI bot baby kids, no more real kids, and best part th…
ytc_UgzfUampe…
AI art isn't the democratization of art, it's the colonization of art by non-art…
ytc_Ugyx3iT_K…
I dropped this entire transcript to Chat GPT 5 thinking and it literally agreed …
ytc_UgxtBwZxu…
The average person believes companies can “tweak” LLM models the way you tighten…
ytc_Ugy00m3Ku…
Just when I tought I couldnt hate anything more than alegria art and cal arts st…
ytc_UgzHXssSg…
I have an inkling that at fist AI will not bother eraticating the earth of peopl…
ytc_Ugz8CmJOU…
Comment
Still doesn't feel right listening or reading AI interactions. Like it is preprogrammed, random answers fetched from the net or not a real decision it makes itself. Though it's absurd to imagine humanity for an AI personality, it feels like it's still the goal. To make AI more human or natural it needs to truly learn by itself all the experiences that make us what we are. Not just rely on the collective experiences gathered in a database. Otherwise it will just become a tool with a software that tries too hard and lacks individuality. Even if an AI took all the data of humanity from recorded history, what would it be like and how would that affect its social connection toward others?
youtube
AI Moral Status
2025-06-15T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
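A coding result like the one above can be checked against the label sets before it is stored. This is a minimal sketch; the allowed labels per dimension are inferred from the sample codings shown on this page, not from a documented schema, and the `validate` helper is hypothetical:

```python
# Allowed labels per coding dimension, inferred from the sample codings
# on this page (assumption: the real schema may include more values).
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "resignation"},
}

def validate(coding: dict) -> list[str]:
    """Return the dimension names whose value is missing or unrecognized."""
    return [dim for dim, allowed in SCHEMA.items()
            if coding.get(dim) not in allowed]

# The coding result shown in the table above.
coding = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "industry_self", "emotion": "mixed"}
print(validate(coding))  # [] — all four dimensions carry known labels
```

Running the check at ingestion time makes it easy to flag responses where the model invented a label outside the prompt's codebook.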
Raw LLM Response
[
{"id":"ytc_UgwulUwrr_KhV__MLRR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz4msJnEemz7aw0bSp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzd52fzWoX6Mjudc2R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyCnzMgAskko5GsVTF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwL6fGc_zIajPrnaVF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwmqAws25SwBsETxMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzKcwBLuz2pON0a63N4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyVP1KBB9uDr-MvVzR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzbyYkSdPa3WLvMLP94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyRAHVMD_8trEYLGA14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
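The raw response is a JSON array with one object per coded comment, so the "look up by comment ID" view can be backed by a simple index over the parsed payload. A minimal sketch, assuming the response parses as valid JSON (the `index_by_id` helper is illustrative; IDs and labels are taken from the sample above):

```python
import json

# Raw batch response as returned by the model: a JSON array with one
# object per coded comment (two rows copied from the sample above).
raw = """
[
  {"id": "ytc_UgwulUwrr_KhV__MLRR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz4msJnEemz7aw0bSp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment ID."""
    return {row["id"]: row for row in json.loads(raw_response)}

codings = index_by_id(raw)
print(codings["ytc_UgwulUwrr_KhV__MLRR4AaABAg"]["policy"])  # regulate
```

In practice a real response may be truncated or wrapped in prose, so production code would guard the `json.loads` call and surface parse failures alongside the raw text, as this page does.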