Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "The reality is not continuous, it's discrete, so there's no paradox. IT'S NOT TH…" (ytc_UgyeIgDBU…)
- "This is great...but also scary when you think about legitimate artists being cal…" (ytc_UgwWtAPKI…)
- "AI sucks, sure, but it's getting to a point where it sucks a lot less than the "…" (ytr_UgytUzom9…)
- "I posted on dA years ago, and last time I logged in was over 2 years ago; consid…" (ytc_UgxeMiFke…)
- "AGI is already created. It's Sam. He isn't creating agi right now, he is taking …" (ytc_UgzXOxgiY…)
- "Thing is, it must be a global effort. No country is going to hamstring itself ec…" (rdc_gx703jg)
- "It scares the bejesus our of me, and still I just subscribe to chatgpt + affread…" (ytc_Ugx27Kw1O…)
- "Human intelligence was formed over generations that occurred over 20 years per g…" (ytc_UgwKNQy1R…)
Comment
Not surprising. While the human brain is an amazing thing a supercomputer can run circles around us logically.
I imagine if left unchecked an AI could rapidly develop things humans aren't ready for. I remember watching a movie that explored this idea. Watch the movie Transcendence.
Source: reddit · Topic: AI Responsibility · Timestamp: 1606052374.0 (Unix epoch) · Score: -1
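The comment timestamp is stored as Unix epoch seconds. A minimal sketch of converting it to a human-readable UTC datetime (the value is the one shown above):

```python
from datetime import datetime, timezone

# Comment timestamps in this dashboard are Unix epoch seconds.
ts = 1606052374.0
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # 2020-11-22T13:39:34+00:00
```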
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_gd8ct18","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"rdc_gd7bez8","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"rdc_gd7nluk","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"rdc_gd7o8a2","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"rdc_gd7s8g9","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
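The raw response is a JSON array with one object per comment ID, one value per coding dimension. A minimal sketch of parsing and validating such a batch, assuming the allowed category sets are exactly those seen in the examples above plus the "unclear" fallback (the real codebook may define more):

```python
import json

# Allowed values per dimension — inferred from the sample output above;
# this is an assumption, not the project's actual codebook.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response; reject any out-of-schema code."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

raw = '[{"id":"rdc_gd7s8g9","responsibility":"ai_itself",' \
      '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
print(len(validate_batch(raw)))  # 1
```

Rejecting unexpected codes at parse time keeps a model that drifts off-schema from silently polluting the coded dataset.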