Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "I'll be the one to get emotionally attached to a pet robot and cry if it stops w…" (ytc_UgyQLeUXl…)
- "There's a fundamental problem with robots and AI, which is, although there is an…" (ytc_UgylCHRLy…)
- "If I have to draw like an insane person, then I will for the sake of stopping A.…" (ytc_Ugz_tO7JA…)
- "I'm sorry to hear that you don't like the name Sophia. Names can hold different …" (ytr_UgytvIL7H…)
- "@jonathan0berg I don't agree training done on publicly shared art is fair use, …" (ytr_Ugwhj4exX…)
- "HOLLYWOOD MOVIE... Here's the thing about assigning risk estimates, you can say …" (ytc_UgyUyZ5dL…)
- "And not even just that, they're studying LLMs. These are not bots whose purpose …" (rdc_kp09gjk)
- "We need some way to force a symbiotic dependency before the super ai gets here. …" (ytc_Ugw30KBJF…)
Comment
> I read some of those transcripts and I have no idea why anybody would believe that AI had consciousness, let alone anybody with any degree of programming knowledge.

reddit · AI Moral Status · 1655294299 (Unix timestamp, ≈ 2022-06-15 UTC) · ♥ 21
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_icij83k","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"rdc_iciv8wy","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_icjh65v","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_icg0xck","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_icgpx67","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"})
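The coding table above reports every dimension as "unclear" even though the raw response contains concrete codes. One plausible cause: the response is not valid JSON, since the array closes with ")" rather than "]". A minimal Python sketch of how a pipeline might parse such a response and fall back to "unclear" on failure; the names `parse_codes` and `DIMENSIONS` are illustrative assumptions, not the tool's actual API:

```python
import json

# Coding dimensions shown in the result table above (assumed set).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Map comment id -> {dimension: value}; return {} if the JSON is invalid."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        # Malformed model output: caller renders every dimension as "unclear".
        return {}
    return {row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
            for row in rows}

# A stray ")" instead of "]", as in the dump above, breaks json.loads:
bad = '[{"id":"rdc_icij83k","responsibility":"none"})'
good = '[{"id":"rdc_icij83k","responsibility":"none"}]'

print(parse_codes(bad))   # {} -> every dimension rendered as "unclear"
print(parse_codes(good)["rdc_icij83k"]["responsibility"])  # none
```

With this fallback, a single bad closing character is enough to blank out all four dimensions for the whole batch, which would match the all-"unclear" row shown above.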