Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Hi there! In the video, the presenter asked the robot about the meaning of its n…" (ytr_UgxTQN87O…)
- "Oh people - they're already all over the AI to replace jobs. They are about 100 …" (ytc_UgyILk7fA…)
- "That's my thought as well - is there a credible European alternative yet? I'll h…" (rdc_n5ak3wi)
- "AI IS MORE DANGEROUS THAN YOU THINK,,AND ITS JUST BEGGINNING,,,WHAT WAS HE BRING…" (ytc_UgzYME09p…)
- "when everyone says "they" do this "they" do that, u ppl know that "they" are not…" (ytc_UgzZ9NLYU…)
- "The AI is going to replace doctors, and secretly plant shit in our eyes, and act…" (ytc_UgzWJPvHN…)
- "I ran into this problem today sharing an AI generated pic, saying that "I did ma…" (ytc_UgwVe67zY…)
- "June 2025 and still pretty similar.. And another question: why does chatgpt cap…" (ytc_UgwQNZXL0…)
Comment
Interestingly "text" is one of my personal criterion for AGI, because the way AI "thinks" is in patterns and math, not necessarily social structures. If it can do it perfectly and without hallucinating (like thinking there's 2 R's in strawberry while putting it on a picture/video), then that would be a major leap forward (to me)
Source: reddit · Topic: AI Moral Status · Timestamp: 1747826547.0 · ♥ 19
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_mtgdu6k","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_mtgpu0j","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"rdc_mth4ltn","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_mtgybqj","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_mtgs7yl","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
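The lookup-by-comment-ID view above can be reproduced offline with a short script. This is a minimal sketch, assuming the raw model output is always a well-formed JSON array whose objects carry the fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); a production version would also need to handle malformed or truncated responses.

```python
import json

# Raw LLM response: a JSON array of per-comment codings, shaped like the
# output shown above (IDs here are taken from that response).
raw_response = """
[
  {"id": "rdc_mtgdu6k", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"},
  {"id": "rdc_mtgybqj", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "mixed"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model output and index each coding row by its comment ID."""
    codings = json.loads(response_text)
    return {row["id"]: row for row in codings}

lookup = index_by_comment_id(raw_response)
print(lookup["rdc_mtgybqj"]["reasoning"])  # → consequentialist
```

With the index in hand, inspecting the exact coding for any sampled comment is a single dictionary access, which mirrors what the "Look up by comment ID" control does in the interface.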