Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
This feels way too one-sided. There are issues in both sides. How do you interpr…
ytc_Ugx6WU6bM…
Can you please share information on AI cold calling voice agent creating and rec…
ytc_Ugx6Q6rMG…
If they can't make AI indistinguishable from real videos, they'll make real vide…
ytc_Ugz4WiE_6…
is this green screen or edited like a movie, or this is real robot ???…
ytc_UgzENAW0m…
I’ve spent 12 hrs on a single drawing, for someone to say I don’t have a right t…
ytc_UgwSGNXof…
10:28 the AI gets mad at the repetitive questions and over pronounces the “x” in…
ytc_UgwnBce7l…
😂😂😂 The danger of a chatbot backed by degenerate intel companies and patsy occul…
ytc_UgxhP9xiR…
Yep, all the really top talented people are in anthropic, google or openai...tho…
rdc_mz2g0mu
Comment
My theory on the 9.9 < 9.11 situation is that the training data for an LLM is largely textual and structured. When you think about textbooks and structured documents, the beginning or first section is the most important.
reddit · AI Moral Status · 1750971588.0 · ♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_mzy115o", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n00db2h", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzz8xrp", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzxzd9y", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_mzyafjd", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
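The raw response is a JSON array with one object per coded comment. A minimal Python sketch of the "look up by comment ID" step, using only the fields shown above (variable names are illustrative, not from the tool):

```python
import json

# Raw coder output: one JSON object per coded comment (copied from above).
raw = '''[
  {"id":"rdc_mzy115o","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_n00db2h","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_mzz8xrp","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_mzxzd9y","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"rdc_mzyafjd","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

# Index the coded rows by comment ID so any comment's dimensions
# can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

print(codes["rdc_mzy115o"]["emotion"])  # indifference
```

For example, `codes["rdc_mzy115o"]` returns the full coding for that comment, matching the Coding Result table above.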