Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- `rdc_gtenja9`: "I did an ethics essay which basically went into the ethics of climate change. Wh…"
- `ytc_UgjQy7gtc…`: "[not native here, sorry for grammar] I see that (one little part of) the problem…"
- `ytc_UgxyUfVJc…`: "Anyone who saw my AI chats would just become really concerned how many different…"
- `ytc_UgylrFscC…`: "Since you are trying to spin this as self driving cars being dangerous, an emplo…"
- `ytr_UgymqFYEA…`: "Honestly people like your mom's friend are really nice. Maybe he can't be called…"
- `ytc_Ugz-BJ4C9…`: "To me, AI empowers top management to equip themselves and take direct action in …"
- `ytr_UgxXWNJ50…`: "This is real talk! AI is only going to get more advanced. College and grade scho…"
- `ytc_Ugwt9GBlb…`: "So I asked AI what it thinks about this... But does that mean that AI will re…"
Comment
We do not even manage to ensure Human Rights, not speaking of Animal Rights. So, even talking about "Robot Rights", is absurd. And if one day there will be Robots becoming Independent, then they will wipe out Humanity quicker, than we can even finish the thought of it. So Humans giving Robots Rights is non-sense.
Source: youtube · AI Moral Status · 2020-06-26T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx6ldpxbx3SzabORuZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxibPKJJBy2r_y7puJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz_qrfReL5oYv6DDTx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxPS73G_XxZJnSoloR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwaEYHPUWgtIaUOt0d4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwmlSmJNq5nm1GZmTV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw1dLMaXimwSvjs_Lx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxnfqDwq7AGy3amLxR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxmAnrNsLvMis0SpHt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz0ckEtdKAgZiBnNtd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
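A batch response like the one above can be parsed and sanity-checked before the labels are stored. The sketch below is a minimal validator, assuming the allowed label sets are exactly the values visible in this dump (the real coding scheme may define more); `validate_batch` is a hypothetical helper, not part of the tool shown here.

```python
import json

# Allowed labels per dimension, inferred from the values visible in this
# dump; the actual coding scheme may include additional labels.
ALLOWED = {
    "responsibility": {"none", "unclear", "developer", "government", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"fear", "mixed", "indifference", "approval", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose labels are known."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

# Usage: a one-row batch with valid labels passes through unchanged.
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
print(len(validate_batch(raw)))  # 1
```

Rows with an unknown label (e.g. a hallucinated category) are dropped rather than stored, which keeps the coded table consistent with the scheme.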