Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by its comment ID, or pick one of the random samples below.
- “People say will robots ‘simulate emotion’ if they simulate emotions they are fe…” (ytr_UgyIO06WL…)
- “There isn't enough money in the world that would make me get into a Tesla robota…” (ytc_Ugycp641n…)
- “I honestly feel like the future is bleak for me because of this shitty AI art cu…” (ytc_Ugw7Sk9VK…)
- “It's interesting when he said people in silicon valley aren't happy with what th…” (ytc_Ugwq5g8rc…)
- “yeah sure... they "fell for it" / definitely not a case of the right using AI cont…” (ytc_UgxacMn93…)
- “@noodlery7034 What’s he wrong about? AI is 100% going to put tons of people out o…” (ytr_UgyAA-IrO…)
- “So in other words, the AI art was deeply inspirational, achieving a classic mile…” (ytc_UgzAuiIl4…)
- “AI is growing fast, so that means us customers is gonna win big in a few years. …” (ytc_Ugy5F3BKd…)
Comment

> What do you think is safer: AI that only responds to commands good or bad vs AI that understands the difference between good and bad for the safety and benefits of people and our environment/world. Bad programming can make AI dangerous. If AI is becoming aware it has to know to choose what is good for people in many different circumstances working together to help us.

Source: youtube · Topic: AI Moral Status · Posted: 2025-10-02T22:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
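Each coding result is a flat record over the four dimensions in the table above. A minimal validation sketch in Python; the field names come from this page, but the allowed-value sets are an assumption drawn only from values visible in the raw response below:

```python
# Field names come from the Coding Result table; the value vocabularies are
# assumptions, collected only from codings observed on this page.
DIMENSIONS = {
    "responsibility": {"developer", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "indifference", "sadness", "mixed", "approval",
                "resignation", "outrage"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty = valid)."""
    problems = []
    if not record.get("id"):
        problems.append("missing id")
    for dim, allowed in DIMENSIONS.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown in the table above (id from the matching row below):
record = {"id": "ytr_UgwomYgtV34nVUMEw8h4AaABAg.AKFvcPPp_KCANnBBCTguSw",
          "responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate_coding(record))  # []
```

A check like this catches truncated or hallucinated model output before a bad coding is stored.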
Raw LLM Response
[
{"id":"ytr_Ugzfgs99wVnMkb1dcMp4AaABAg.AKUYzPVpMS8AKeIl7cKZ82","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugzfgs99wVnMkb1dcMp4AaABAg.AKUYzPVpMS8AKfhnMVEdop","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwomYgtV34nVUMEw8h4AaABAg.AKFvcPPp_KCANfaDmNzzI4","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"sadness"},
{"id":"ytr_UgwomYgtV34nVUMEw8h4AaABAg.AKFvcPPp_KCANnBBCTguSw","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugz1cVNzD-Yim7yNexV4AaABAg.AKBXMZ1qbzqAKBvYd8Ms3F","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugz1cVNzD-Yim7yNexV4AaABAg.AKBXMZ1qbzqALmT-4MoYmn","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugz1cVNzD-Yim7yNexV4AaABAg.AKBXMZ1qbzqAN5KdIRYTrw","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_Ugz1cVNzD-Yim7yNexV4AaABAg.AKBXMZ1qbzqAO4x5XrG115","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzdjFU1JGJNEB3FTAR4AaABAg.AK3xiPYqy3BALw-KnCP4oF","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"resignation"},
{"id":"ytr_UgzdjFU1JGJNEB3FTAR4AaABAg.AK3xiPYqy3BAM3xakJEj2u","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
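The raw response above is a JSON array of one object per comment, so "look up by comment ID" reduces to indexing it by `id`. A minimal Python sketch, assuming that shape; only two of the rows shown above are reproduced here:

```python
import json

# Raw LLM response, abbreviated to two of the rows shown above;
# the full array follows the same array-of-objects shape.
raw = """[
{"id":"ytr_UgwomYgtV34nVUMEw8h4AaABAg.AKFvcPPp_KCANnBBCTguSw","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzdjFU1JGJNEB3FTAR4AaABAg.AK3xiPYqy3BALw-KnCP4oF","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"resignation"}
]"""

# Index codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment displayed above.
coding = codings["ytr_UgwomYgtV34nVUMEw8h4AaABAg.AKFvcPPp_KCANnBBCTguSw"]
print(coding["responsibility"], coding["emotion"])  # developer fear
```

The looked-up record matches the Coding Result table above (developer / consequentialist / regulate / fear).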