Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I still don't understand why people think AI must reach superintelligence in order to wipe out humanity, or the planet if you will.
The atom bomb is not clever in any way on its own, and it can do that. AI will not even have a chance to mature to that point, because there is a much higher probability that a slight failure in a widely adopted system acting live and autoreactively in the physical world will do unrecoverable damage to humans, the planet, whatever that may be. What will most likely happen is not intentional harm but a "bug" that destroys something on the physical side of reality, which cannot recover as fast as the digital world does, and so will simply not recover fast enough, or at all.
No bad mind needed at all: a simple mistake applied at a scale and over a time-span we have never seen before.
youtube
2025-10-15T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyrv371Hu6eOs7YGJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzfDXiU2R6dbVPqbLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy5w-EsmmTQea4yaZt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyJeDIiGCTk_xP3xRR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxoNhkeL6MlMsBHA814AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxayCbSK2GpVCbV0T14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxHmA602z2DvJaZT8t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwxqD-jpeSdARHbOrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxQMAuuzU8-ZfDBFbl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzscHGwG1h4ROH_2iB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```