Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Also, while we haven't worked it out, it seems to me that most of our major theo…
rdc_mlhv02z
AI is not gonna go anywhere. There’s too much money in it and too much potential…
ytc_UgxWU2tLf…
Once regular fees are set and Robotaxi rolls out next year, Tesla should cost ar…
ytc_UgyYG07QR…
It makes literally zero sense from Artstation's PoV to ban AI art entirely. If A…
ytc_Ugzdt9ZZP…
This is super dumb. I know people who still have cars made in the 90's. We're no…
ytc_UgzP7K04L…
Trust no one that says this kind of bullshit is "one of the many good things tha…
ytc_UgyOnU9uN…
There's an interesting follow up question to this as well. AGI won't come from …
rdc_nk9u899
that’s for the better! i don’t want ai to steal my work. id rather laugh as ai s…
ytr_UgySq9jbB…
Comment
All humans just had to agree, that we never invented AI. Logically it is possible, no law of nature prevents us from agreeing and stopping it. But practically the likelyhood would be 0,0000000001%.
reddit
AI Moral Status
1747835938.0
♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_mth3a2q","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_mtg9r8y","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mtgggoh","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"rdc_n74c9j3","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"rdc_n748igm","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
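The raw response above is a JSON array of coding objects, one per comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and indexed for lookup by ID (the `index_codings` helper and the sample `raw_response` string are illustrative, not part of the tool itself; this assumes the model returns valid JSON in exactly the shape shown):

```python
import json

# Hypothetical raw LLM response, in the array-of-codings shape shown above.
raw_response = '''[
  {"id":"rdc_mth3a2q","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_n748igm","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

def index_codings(raw: str) -> dict:
    """Parse a raw coding response and index each coding by its comment ID."""
    codings = json.loads(raw)
    return {entry["id"]: entry for entry in codings}

by_id = index_codings(raw_response)
print(by_id["rdc_n748igm"]["policy"])  # -> regulate
```

In practice a parser like this would also want to validate that each entry carries all four dimensions (responsibility, reasoning, policy, emotion) before accepting the batch, since a malformed model response would otherwise surface later as a missing key.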