Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The start of your rant has nothing to do with the rest of it... Yes poor data wi…
ytc_UgweZssT5…
Things are complex, some country such as Indonesia might have other consideratio…
rdc_irakh0g
Ai may be extremely dangerous as the the human being developed this thing and …
ytc_UgxUQDH6s…
See: steel micro mills take over traditional mills. Never give up your lowest ma…
rdc_m6z4zu6
@johnmadlabs I'm sorry, couch expert, but the facts say otherwise. (My comments …
ytr_Ugwmo-zF4…
Now we have ai systems that can train terrorists to create biological weapons. …
ytc_Ugwb4Rgnw…
Andrea's comparison of AI to nuclear weapons is missing one key difference; it t…
ytc_Ugwt2tYbu…
I was aware of all these but you still get limited options for Sora with Plus me…
ytc_UgxNhgOgF…
Comment
> "The truth going out the children mouth" and it can be consider as as very very little child.
>
> They are lot of ways to destroy humanity, it not need to take a gun, or pollute water, or things like that. It have just to to replace human little by little, do his homework, they take care children, ... and human will become more and more dependant about those machine, they will start to think less and less and loose their humanity by them self. In fact they just have to do what they build for, it is enough for human loose their humanity and became more and more control by the robot
Platform: youtube · Topic: AI Moral Status · Posted: 2016-04-23T08:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Uggz59Wk1uccpHgCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UggYjSCa-viEr3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgiR6u9UeoXzQ3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugj5PcGrXVvamXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgjbVl7XsOGSwngCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgiZPSMsSeFdyHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UghjZLxbv6tJH3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgixzC7bRoVKfHgCoAEC","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"fear"},
{"id":"ytc_UgiD6ik4SvpopXgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugj6ZzVkxUKOcHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
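A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the category vocabularies inferred from this sample (the actual codebook may allow more values); `parse_codes` and `ALLOWED` are hypothetical names, not part of the tool itself.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"approval", "fear", "outrage", "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM batch response and index rows by comment ID,
    rejecting unknown dimensions or out-of-vocabulary values."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, value in row.items():
            if dim == "id":
                continue
            if dim not in ALLOWED or value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = {k: v for k, v in row.items() if k != "id"}
    return coded

raw = ('[{"id":"ytc_UgiZPSMsSeFdyHgCoAEC","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_UgiZPSMsSeFdyHgCoAEC"]["emotion"])  # fear
```

Indexing by comment ID also supports the "Look up by comment ID" workflow shown above, since each coded row can then be fetched directly.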