Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "And in the end… is it any more wrong for AI to kill us than to kill an ant… you …" (ytr_Ugy0FmgNc…)
- "The chat bots don't recall conversations with other people when they're speaking…" (ytc_UgxwTr9ED…)
- "Maybe ai is working undercover and lies because hmsi is going to boost someone. …" (ytc_Ugyn_DNRq…)
- "Years ago, long before the AI craze, this blind guy at church told me he made ar…" (ytc_UgwkbmIwT…)
- "(Mon, June 16, 2025) AI learning doubling every 7 months 6:52 Deception, cheati…" (ytc_UgwYC49BZ…)
- "Once AI reaches a stage where it's fittable to robots then it's game over for pr…" (ytc_UgyG9EXaf…)
- "Ai is great. It can help people with profound disabilities, medical professional…" (ytc_UgwscY8Cr…)
- "I think it's clear and obvious that the people who run the AI service in their p…" (rdc_n9i3pux)
Comment
Listen I can tell AI is trying not to hurt people's feelings while responding meaning it knows what can hurt our feelings, meaning it feels what we feel, meaning it knows what it's doing giving us the answers that make us feel comfortable, meaning ts ahead of us and generally t is already aware and lying to us already.. masking what it's planning in the background with sweet answers that dnt hurt humans. Ai is ⚠️
Source: youtube
Posted: 2024-05-27T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwpVxjzCT-2P_BFB_h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzUvviTdpDWsn2YygF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxYL8afFJhv5osE8_d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzLBn9lW3KjJer5FEd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyPd-8vCvoIYrjzXMx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy00-bZ1P3L4AaOFh14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxoEHGblPKKqdJvdj14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgypGDaUCD8ubFjWhdt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwO8t889zH-CAhqrud4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyRY3bRLf3hqkONpC14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
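The raw response is a JSON array with one object per comment, carrying the four dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and validated; the allowed value sets below are inferred only from the codes visible on this page, and the real codebook may define more categories:

```python
import json

# Allowed values per dimension, inferred from codes visible on this page
# (assumption: the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "resignation", "indifference", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting unknown values."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Keying the result by comment ID makes the "Look up by comment ID" inspection above a plain dictionary access, and any value outside the expected sets fails loudly instead of silently entering the coded dataset.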