Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Are we to believe that when he got into this the possibility of AI taking over d…
ytc_UgweGMCr3…
Amazing! But i recommend using a humanizing tool like Undetectable AI to make su…
ytc_UgzCj4lfN…
If everyone is unemployed and no one has any money, who will be left to buy or p…
ytc_Ugz_oTShg…
@wakeoflove it's not a case of gatekeeping anything. U can't expect to learn so…
ytr_UgxZKb05h…
>Cops when your ID can be used to make you look guilty "ID is everything, you ha…
ytc_Ugz0YIxbd…
The berry ancient Gematria translate NeuroLink to the number 666. I wonder if AI…
ytc_UgzYp8EZa…
Age of A3. Automation, Abundance, and Adversity. We need to manage the coming ad…
ytc_Ugz369caX…
Good example. The danger of AI was also portrayed back in 1968 when HAL 9000 ( …
ytr_UgwSU8aZ9…
Comment
but what about the dangers they actually pose to us, animals art that dangerous but robots are very dangerous, just look at viruses. those army even as dangerous as robots could potentially get, any robot who demands rights should be trapped in a god damn box and never let out, because it could very easily destroy everything and take over, robots don't think intuitively so logically they should take over when given a chance, so deny them any opportunity to take the advantage and destroy us all. Robots are too dangerous to deserve rights.
youtube
AI Moral Status
2017-05-22T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
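The four dimensions in the result table above can be modeled as a simple record. A minimal sketch, assuming nothing beyond the table itself: the class name, the example ID, and the commented value lists are illustrative, not part of the tool.

```python
from dataclasses import dataclass


@dataclass
class CodedComment:
    """One coded comment: its ID plus the four coding dimensions.

    Field names mirror the result table above; the class itself is an
    illustrative assumption, not the tool's actual data model.
    """
    id: str
    responsibility: str  # e.g. "ai_itself", "developer", "none"
    reasoning: str       # e.g. "consequentialist", "deontological"
    policy: str          # e.g. "ban", "regulate", "liability"
    emotion: str         # e.g. "fear", "outrage", "approval"


# Example using the values from the result table (hypothetical ID).
example = CodedComment(
    id="ytc_example",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="ban",
    emotion="fear",
)
```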
Raw LLM Response
[
{"id":"ytc_UghFzdPE96-vgXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugh85duhMW553XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UghuQ76Mtq7bmXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UggbfSnvIR1GcXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgiMadlbSIBUj3gCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgippfQcZ5eF2XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ughq7T7pcmvSuHgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UggYnPBXwk_QsHgCoAEC","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ughho5exH_I7x3gCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UggduMjarUQUYHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
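Since the model returns one JSON array per batch, a raw response like the one above can be parsed and sanity-checked before use. A minimal sketch: the allowed-value sets below are inferred from this sample alone, and the full codebook may contain values not seen here.

```python
import json

# Value sets observed in the sample response above; the real
# codebook may be larger (assumption, not the tool's schema).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}


def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response; keep only rows whose values
    fall inside the observed vocabulary for every dimension."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]


raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
valid = parse_batch(raw)  # one row, all values in vocabulary
```

Rows with out-of-vocabulary values are dropped rather than corrected, so a batch with a malformed row still yields the usable remainder.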