Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "It's actually way easier to create such things than people think, anybody with a…" — ytc_Ugy5WWeqf…
- "I totally get where you're coming from! The idea of AI like Sophia discussing wi…" — ytr_UgxDwijHG…
- "Commercial Journeyman Plumber here, we aren’t as far behind other fields as one …" — rdc_j42ny2c
- "And midjourney will get copyright by all the artists they stolen form to train t…" — ytr_UgyDxM_8c…
- "AI’s greatest risk has already been realized. It dehumanized our societies. In w…" — ytc_Ugxaw5XP3…
- "the horror of AI is coming, the biggest pollution of nature to build a slave soc…" — ytc_UgzNwqRbA…
- "next step in 50 years: stop making kinders, build AI Humanoids in laboratories a…" — ytc_UgyRqdCku…
- "Singularity supposed to happen in 2029. Where the male robot says he will make h…" — ytr_UgyGx9aY3…
Comment

> As long AI is use for the benefit of humanity that will create merits of it. If those evil humans who develop master plan who made AI do bad things to humanity and after that they will bear all the consequences of Karma even they are not around anymore but it was auto pilot till the next generation or more and so is the Wheel Of Karma is created for destruction by that very human will also continue to bear the negative consequences that he created.

Source: youtube · AI Governance · 2025-10-15T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzU9ZyB_5I47ZeP9rV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz03-sHj1IGtIbgl8l4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwupokL-dLY7CPFPPx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwXn1GExJCqciweKct4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxuczIPHB4_qcwa6Ax4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx1nSmp0efNMrkMz4J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzzMYwBSBOBZ99iULJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz0FDksuWxyXI7a51t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwj8q4IaFUEd_-5GB54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzZ7eTfPr8r6CJknx14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
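The raw response above is a JSON array of per-comment codes, which makes the "look up by comment ID" view straightforward to build: parse the array, validate each row, and index it by `id`. A minimal sketch follows; the allowed value sets are inferred only from the labels visible in this batch, so the full codebook may permit additional values.

```python
import json

# Value sets inferred from the codes visible in this batch (an assumption;
# the project's full codebook may define more labels per dimension).
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "virtue", "deontological", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation",
                "indifference", "approval", "mixed"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response and index the codes by comment ID,
    rejecting rows with a missing ID or an out-of-vocabulary label."""
    rows = json.loads(raw)
    indexed = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row without id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {row.get(dim)!r}")
        indexed[cid] = {dim: row[dim] for dim in ALLOWED}
    return indexed

# Two rows copied from the raw response above.
raw = '''[
  {"id": "ytc_UgzU9ZyB_5I47ZeP9rV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz0FDksuWxyXI7a51t4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]'''

codes = index_codes(raw)
print(codes["ytc_Ugz0FDksuWxyXI7a51t4AaABAg"]["reasoning"])  # virtue
```

Validating against a fixed vocabulary at parse time catches the common failure mode of batch coding, where the model invents an off-codebook label for one row and silently corrupts downstream tallies.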