Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
AI is just a beautiful tool created to help humans. Humans are the greedy pigs w…
ytc_UgyuLXEx_…
If you’re doing stuff that bad with ai maybe you should get off the internet and…
ytr_Ugxvjhr5U…
@jtk1996 what else should he take away? This expert is telling us his future pre…
ytr_Ugx6yjVfg…
Please don’t use ai on serious topics it’s insensitive also why are you acting l…
ytc_UgxsDRmV-…
Nobody can take a joke for what it is intended. Modern humans are magnifying the…
ytc_Ugw9EDQ1a…
I'm a Roman Catholic, but I once listened to an Muslim Imam speak about AI and o…
ytc_UgwaO307H…
I'm worried humans will be used to make AI like in the Matrix, because when they…
ytr_UgzE5c4Fl…
they praise ai but i promise you if you told ai to make a game with a lot of big…
ytc_Ugz1x7rru…
Comment
first of all, being as imperfect as we are, we shouldn't try to achive AI with selfawereness mainly for 2 reasons: N° 1 we should consider trying to solve our problems as a divided species first. when all humans would agreed with each other and petty problems like those which roots are economical benefits disappear, then we, as a species, should try to create AI! N° 2 it's likely that would happen what every movie of cyberpunk showned us, robots would determine that humanity is sthe biggest threat to the planet and try to exterminate us!
thats my opinion! we are greedy, even with the knowlegde and we should be carefull...
youtube
AI Moral Status
2017-02-23T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgivyiW_ofw5JngCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgjA4P2zvsANW3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgjgGblLUS576XgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgjKd8TV71HVOngCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgibNi4ZcKPCQngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugi3O5ApRuIrVHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghEXl3QjOuqa3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghvTASVmKa1qHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggpYqw-oawjaXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UggeerIRJVSAs3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
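The "look up by comment ID" step above can be sketched as a small script: parse the raw model output (a JSON array of coded records, one per comment) and return the record whose `id` matches. This is a minimal sketch, not the tool's actual implementation; the `lookup_by_id` helper and the truncated sample response are illustrative, and the record schema (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) is taken from the response shown above.

```python
import json

# A trimmed stand-in for the raw LLM response shown above: a JSON array
# of coded records, one object per comment.
raw_response = """
[
  {"id": "ytc_UgibNi4ZcKPCQngCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgivyiW_ofw5JngCoAEC", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
"""

def lookup_by_id(raw: str, comment_id: str):
    """Parse the raw model output and return the record for one comment ID,
    or None if the model did not emit a record for that ID."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup_by_id(raw_response, "ytc_UgibNi4ZcKPCQngCoAEC")
print(record["policy"])  # -> ban
```

In practice the model output may not be valid JSON, so a production version would wrap `json.loads` in error handling and log unparseable responses rather than assume the schema holds.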