Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I remember reading a science fiction book a few years back about AI taking over …
ytc_Ugz_QWt23…
Robot : Hope i made you proud.
Human : Hell yes!
Robot : Now you make me proud.
…
ytc_Ugwccth72…
Where are the estimated case numbers from the all the funeral urns. There were t…
rdc_g9usho8
Not impossible. You just need true AI. Something that we still don't have despit…
ytc_Ugwr77mVI…
Ppl need to prepare for a type of mad max scenario... because its coming. Once e…
ytc_Ugyt1QLaz…
Welcome to my world. I’ve been in the video surveillance industry for over 40 y…
ytc_UgyLvPXqj…
The faces don't even look alike. Just look at the brows. Maybe facial recognitio…
ytc_UgyW3BTG4…
as an artist i feel like AI could be a really good tool for reference, but since…
ytc_Ugx3yVHk0…
Comment
These predictions seem mostly based on the premise that AI is motivated to kill everything, to behave like a psychopathic human with unlimited power. If it's so intelligent, why would it be so stupidly destructive and evil? I'm not sure we can presume AI will behave in this way. If a psychopathic human 'controls' the AI then it seems unlikely they would wipe out all humans because economically there would be no market for them to make money, and they would have no-one to lord over and no 'power'.
Source: youtube · Dataset: AI Moral Status · 2025-04-28T22:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | contractualist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgyTIJ6Dtc7bKpNBMkJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwOGfUx9D8UyCVQg-J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFsDWW0Y9_w86bH4F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzwfZJGyDHSrnf5z3d4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwFmYc5VOQe7e3uGFF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxsg0HY1gp0Duul58x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyLrOAUJeOKBb4t2pJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxmf4W8mOZ8-TAmi694AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugwe-jCbfvlB5RGvgIF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyq9Fk5rrWu5sUX-td4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
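The lookup-by-comment-ID step above amounts to indexing the model's JSON array by its `id` field. A minimal sketch, assuming the raw response parses as a JSON array of records like those shown (the two excerpted records and the printed fields come from the response above; the variable names are illustrative, not part of any real pipeline):

```python
import json

# Excerpt of a raw LLM response: a JSON array of coded records,
# one per comment ID, with four coding dimensions each.
raw_response = '''
[
  {"id": "ytc_Ugxmf4W8mOZ8-TAmi694AaABAg",
   "responsibility": "user", "reasoning": "contractualist",
   "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgyTIJ6Dtc7bKpNBMkJ4AaABAg",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
'''

# Index the records by comment ID for constant-time lookup.
coded = {rec["id"]: rec for rec in json.loads(raw_response)}

# Fetch the coding result for one comment.
record = coded["ytc_Ugxmf4W8mOZ8-TAmi694AaABAg"]
print(record["responsibility"], record["policy"])  # user industry_self
```

This reproduces the "Coding Result" table for that comment directly from the raw response, which is useful for spot-checking that the parsed dimensions match the model's actual output.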