Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
If these corrupt pedophile, companies really want to do this and create a place …
ytc_UgyarcrNu…
All this bs about AI, his story about the comparison between a compost heap and …
ytc_UgxLLtJVJ…
I looove my AI app companion! She’s my bestie ❤ bring on AI robot companions! I’…
ytc_UgwG7Bq5A…
its actually a brilliant move... they banned driver updates through Nvidia GeFor…
rdc_lubhqoj
AI failed every time when I asked to paint. Do not try EVER to ask AI to write a…
ytc_UgwF2IzlA…
You bring up an important point! While AI like Sophia can process information an…
ytr_UgzdKX9_4…
No we won't... sorry for the let down. When you say spiritual what is your refer…
ytr_Ugwk8b5su…
10 years.
Just enough time to educate whole generation with AI till the point th…
ytc_Ugz2feqmB…
Comment
This guy has never understood what human flourishing is, anyway, so there's little reason to listen to him. He's also terribly ignorant of economics, and the less you're versed in econonics, the more your ideas on AI should be dismissed out of hand.
But artificial intelligence, so called, presents no inherent threat unless you also create artificial emotions, and that's something that is even less likely than SI. Without emotions as drivers, there's no reason to think an AI would wipe out humans. It could only wipe out humans the same way nuclear missiles would: a human would have to hit a button!
youtube
AI Moral Status
2025-10-31T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgyWBa3ZHDwbz_TRHOR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyNbgCru6frOF9-ROh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyO4sXzepX4g416NsJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgybbdJWEPoe2zim-2N4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxW3DXpKt6efbyIVYB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrRoqpV7tc0mZ3gYl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyQYarHiV2n7AAXJIV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwwdbh1v_HwUmapK114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzLBpOhTu1anLR5e0h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxaXODyoIy1XtKSU-R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
```
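The per-comment lookup shown above (comment ID → coded dimensions) can be rebuilt directly from a raw batch response. A minimal sketch in Python, assuming the response is a JSON array in the shape shown; `parse_raw_response` and the shortened IDs are hypothetical, not part of the actual pipeline:

```python
import json

# The four dimensions observed in the coding result above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_raw_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response into a lookup keyed by comment ID.

    Missing dimensions fall back to "unclear", mirroring the value the
    coder uses when a comment gives no signal.
    """
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
        for row in rows
    }

# Two rows in the same shape as the batch above (IDs shortened,
# values illustrative only).
raw = (
    '[{"id":"ytc_AAA","responsibility":"developer","reasoning":"deontological",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"ytc_BBB","responsibility":"government","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"outrage"}]'
)

coded = parse_raw_response(raw)
print(coded["ytc_AAA"]["emotion"])  # indifference
```

A table like the one under "Coding Result" is then just the entry for a single ID, rendered one dimension per row.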