# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Random samples
- `ytc_UgwmRXer0…`: "I called amazon...one of the biggest tech firms in the world. The voice recogni…"
- `ytr_UgyBi7a2b…`: "*I'm hopeful that we can shame AI into obscurity the way we did with NFT's.* Con…"
- `ytc_UgySJzj8L…`: "Why are we already using ai. Especially in health problems because he have littl…"
- `ytc_Ugw_ZwBbM…`: "The competition to build the smartest AI is def risking life for everyone....the…"
- `ytc_Ugw8YP817…`: "Any AI smart enough to pass a Turing test will also be smart enough to know to f…"
- `ytc_Ugzd6hGz5…`: "We should consider a universal basic income and reconsider copyright laws. The m…"
- `ytc_Ugzy-J9hR…`: "Earlier this year an ai landing system was developed on the airbus a330 and ai w…"
- `ytc_Ugx38_EDg…`: "Tesla's FSD is SOOO far ahead of any other self driving car. This video has no m…"
## Comment

> This is the shit you read on your average pro-suicide space online. There is absolutely nothing new or exceptional about this kind of sentiment, that's exactly *why* the LLM predicts this is an appropriate response, because it's something that predates it.

Source: reddit · Topic: AI Governance · Posted: 1762503339.0 (Unix timestamp, 2025-11-07 UTC) · ♥ 259
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
### Raw LLM Response

```json
[
  {"id": "rdc_nnk2hjq", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_nnllc2r", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_nnjrep9", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_nnkbs5t", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nnlfzn8", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
```
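The raw response is a JSON array with one coding object per comment, so it can be parsed and indexed by comment ID to recover the row shown in the Coding Result table. A minimal sketch, using the five codings above (variable names are illustrative, not part of the tool):

```python
import json

# The raw LLM response shown above: a JSON array of per-comment codings.
raw_response = """
[
  {"id":"rdc_nnk2hjq","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_nnllc2r","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"rdc_nnjrep9","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_nnkbs5t","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_nnlfzn8","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
"""

# Index the codings by comment ID for direct lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for one comment in the batch.
coding = codings["rdc_nnkbs5t"]
print(coding["emotion"])  # indifference
```

This is why a batched response can still drive the single-comment detail view: one lookup by ID yields the dimension values (responsibility, reasoning, policy, emotion) rendered in the table.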