Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "10 years ago I wrote a paper in university concerning the ethical implications a…" (ytc_UgwfuReyx…)
- "Cool thing about the AI bros raging in your comments section: They have to watch…" (ytc_UgyaaHe26…)
- "Robot tire: goes to the US / Random US citizen: \"I use GUNS\" (like it's pokémon)…" (ytc_UgxxscA6t…)
- "I roll play as a person who doesn’t even really like talking to real people. Ai …" (ytc_UgxKvTP1v…)
- "I just love it how these jerks got onto it only after they heard it can mess up …" (ytc_UgzYdjGqE…)
- "\"Never buy on hype or charisma\" Terence McKenna. Turns out he was right on this …" (ytc_Ugx3EaoC9…)
- "@MCNarrethonestly so far the ai is repeating the same technology that gave birt…" (ytr_Ugwo_BpW2…)
- "I encounter terrible drivers every day. At least with a driverless car, the tech…" (ytc_UgzqYC6vZ…)
Comment

> Once the brain is reversed engineered and replicated

I think *if* is the word you're looking for. AI is already gobbling up way too much power and it's nowhere near brain level.

Source: reddit · Topic: AI Governance · Posted (Unix timestamp): 1705569052 · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_kigarem","responsibility":"none","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"rdc_kiijwr3","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_kievaqt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_kifb8vg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_kig8u8t","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]