Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- @Jon@JonathanLoganPDX 600 IQ AI with 500 PhDs of knowledge would be incredibly i… (ytr_Ugy2Fww68…)
- AI would be goal oriented, because we want them to accomplish a task. If they ge… (ytr_Ugzjiad5p…)
- Behind the boogeyman they call AI is the same crook :corporate greed and and a s… (ytc_UgwfxOOAj…)
- I mean if he had a clean record and this face recognition bs said it was him i i… (ytc_UgzN1Cbr8…)
- Good God. This cop is such a mouth breathing, knuckle dragging, imbecile. Got th… (ytc_Ugxb_YI6y…)
- Well we don't know for a fact if tesla AI actually failed or not. Second, even i… (ytc_UgyXKsnW-…)
- I can’t wait to get an AI robot I’m going to send it out to rob everything that … (ytc_UgxGiGKEZ…)
- I fucking hate Sora, I hate ai in general it destroys everything I have such a g… (ytc_UgzdfZ05G…)
Comment (youtube · AI Governance · 2025-08-26T17:5…)

> 3:16 I love the accuracy here. The end won’t be the fault of human hubris, or even AI. The blame will ultimately fall on government systems set up to support and protect cooperations over humanity. Corporate greed will over take human interests, profit will outweigh reproduction.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
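The four coded dimensions above take values from a closed code set. A minimal validation sketch, with the allowed values inferred only from the coded examples shown on this page (the project's actual codebook may contain more categories):

```python
# Allowed values per dimension, inferred from the coded rows on this page;
# this is an assumption, not the project's official codebook.
CODEBOOK = {
    "responsibility": {"government", "company", "developer", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def validate(code: dict) -> list[str]:
    """Return the dimension names whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if code.get(dim) not in allowed]

# The coding result shown above validates cleanly.
row = {"responsibility": "government", "reasoning": "deontological",
       "policy": "regulate", "emotion": "outrage"}
print(validate(row))  # []
```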
Raw LLM Response
```json
[
  {"id":"ytc_UgxbCZ5rUTCvXJUPTfF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxs0gcyVzKWivoGr3V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwvNb-xFRs9R4Gl2lh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw-kJul4C6dqZU_N4x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwDHNq1O45zRNEX_-R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwX1Fi1YP-B5ztpcBF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyRc39gocaHOW9BBLR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwE7tqX7O3pGsv8AIx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzkIJMtudxinVq8xNF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyjDWS35FZT7Ody6tp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
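The raw response is a JSON array of per-comment codes keyed by comment ID, which is what makes the "look up by comment ID" view possible. A minimal sketch of that lookup, assuming the response parses as valid JSON (only a two-row excerpt of the array above is embedded here):

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array where each
# element carries a comment ID plus the four coded dimensions.
raw_response = """
[
  {"id":"ytc_UgwX1Fi1YP-B5ztpcBF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwE7tqX7O3pGsv8AIx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
"""

# Index the coded rows by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the codes for the comment inspected above.
code = codes_by_id["ytc_UgwX1Fi1YP-B5ztpcBF4AaABAg"]
print(code["responsibility"], code["policy"])  # government regulate
```

In practice the model's output may not be valid JSON on every call, so a production coder would wrap `json.loads` in error handling and re-prompt or skip malformed batches.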