Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Maybe Sam Altman will eventually be threatened by one of his own AGI's ....maybe he will then finally realize what he has pushed to create. There are no guardrails here. Sam Altman, please please debate @RomanYampolskiy somewhere on youtube. Maybe even this channel. I'd love to hear if you can prove Yampolskiy (AI Safety expert) wrong in his thesis/book: AI: Unexplainable, Unpredictable, Uncontrollable. If you can do it, I may, MAY consider listening to you.

youtube · AI Moral Status · 2025-08-27T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwbF5bZGmKHQ6CWcu54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwexKcD23VoIk-JoT94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgypZs0r22kesOvtLxR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKBdS1t7ZHkZafInh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyrZd_pO2ij3UcNN-l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzJxdVmKISvRg-1TSp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwwFaDMIVAN4w3EkSF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugznghy-hyV3Pk_KH4N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzQH3noieM0FZ4bK7t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMPTTDLlup0uOMkxB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"mixed"}
]
```
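The raw response is a JSON array in which each element carries a comment `id` plus the four coded dimensions shown in the table above. A minimal Python sketch of how such a batch could be parsed and a single comment's coding looked up; the helper name and validation logic are illustrative, not part of the actual pipeline:

```python
import json

# A one-element excerpt of a raw LLM response, using the same field
# names as the batch shown above (illustrative data, not the full batch).
RAW_RESPONSE = """
[
  {"id": "ytc_UgyrZd_pO2ij3UcNN-l4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "regulate",
   "emotion": "fear"}
]
"""

# The four coding dimensions applied to every comment.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse a raw batch response and return the coding for one comment.

    Raises KeyError if the comment ID is absent and ValueError if an
    entry is missing any expected dimension.
    """
    rows = json.loads(raw)
    by_id = {row["id"]: row for row in rows}
    row = by_id[comment_id]  # KeyError if the model skipped this comment
    missing = [d for d in DIMENSIONS if d not in row]
    if missing:
        raise ValueError(f"entry {comment_id} missing dimensions: {missing}")
    return {d: row[d] for d in DIMENSIONS}


coding = lookup_coding(RAW_RESPONSE, "ytc_UgyrZd_pO2ij3UcNN-l4AaABAg")
print(coding["emotion"])  # fear
```

Validating each entry against the expected dimensions catches the common failure mode where the model omits a field or an entire comment from the batch.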