Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgzcaY-t5…`: "Godfather of AI" ...the ratardation of this gen has never been worse...he relab…
- `ytc_UgyCpaxvO…`: Every country and every organisation is saying "Wow, isn't AI cool?" and rushing…
- `ytc_UgwsltKto…`: Every mistake every robot makes every robot learns from... They're going to beco…
- `ytc_Ugw4SaToK…`: Well. People didn't like digital artists at first, so idk how ai artists will be…
- `rdc_oadi58b`: Probably a true story and fair interpretation. But the majority of people usin…
- `ytr_UgwbtaZ1V…`: I can see where you're coming from! The dialogue with Sophia really touches on t…
- `ytc_UgwbFOnET…`: Here's my dilemma: spend hours perfecting Automation system by hand, or minutes …
- `ytc_Ugwfl4W3z…`: I'm quite disappointed in this one I thought that maybe Brandon would have tried…
Comment
The thing I find the strangest about this is that the AI would have to have motivation and desire in order to choose to do something like eradicate humans. Where would that motivation and desire come from? Why would AI have desire to do anything at all?
Source: reddit · AI Responsibility · Posted: 1710754697.0 (Unix timestamp) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_kveko35","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"rdc_kve6siq","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"rdc_kvewm4j","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_kvf1l2p","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_kvfjq30","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
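A batch response like the one above can be parsed and sanity-checked before the per-comment codes are stored. The sketch below is a minimal, hypothetical validator: the set of allowed values per dimension is inferred only from the responses shown on this page, not from a documented codebook, so the real schema may include more categories.

```python
import json

# Allowed values per coding dimension, inferred from the examples above
# (assumption: the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw batch-coding response into {comment_id: codes},
    rejecting entries that use an unknown dimension or value."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry.pop("id")
        for dim, value in entry.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = entry
    return coded

raw = ('[{"id":"rdc_kveko35","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"unclear"}]')
codes = parse_codes(raw)
print(codes["rdc_kveko35"]["responsibility"])  # ai_itself
```

Validating at parse time keeps a malformed or hallucinated model response from silently entering the coded dataset; a rejected batch can simply be re-sent to the model.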