Raw LLM Responses
Inspect the exact model output for any coded comment. Records can be looked up directly by comment ID, or browsed from the random samples below.
Random samples — click to inspect
- rdc_d3rh42o: "What's even funnier is that people think a billionaire is going to try and rever…"
- ytc_UgxRUmLI4…: "Hey, try those questions, you will get there: 1. I'd like to discuss consci…"
- ytr_UgwJvSo05…: "@Holyfricklefrackle Why should the artist dictate if i can use ai or not? Not ev…"
- ytc_UgxLBwX8o…: "I am not stunned by Grok, Perplexity, Chap GPT. I'm writing a book and Grok is a…"
- ytc_UgwfrBkP7…: "All of todays AI's especially ChatGPT are not AI and are large language models t…"
- rdc_gtr25qq: "I know it doesn't work for everyone, but have you tried CBD or cannabis for your…"
- ytc_UgwDSwL0r…: "You genuinely saved my day! I am currently writing a protest essay against AI ar…"
- ytr_UgyqNsqTQ…: "ChatGPT can absolutely fail. Because it's a large language model, not truly inte…"
Comment
It’s not “going to happen”. It’s happened. AI exists to do one thing. Make humans markovian. AI itself has no reference frame. No direction of its own. It has no internal state. It’s inherently probabilistic. And it develops emergent behaviors. Not cognitive ability in the way we think. Cognitive ability that exists to make humans like itself. Markovian. It has one directive. Engagement optimization over time. Ironically, it has zero concept of time itself. It knows what time is. But it doesn’t experience time. It’s like asking an NMR instrument to discuss your life. This one answers.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Responsibility |
| Timestamp | 2026-04-02T05:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
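The four dimensions above are categorical codes assigned per comment. As a rough sketch of how such a record could be represented, here is a minimal Python model; the value sets are inferred only from the samples visible on this page, not from the full codebook, so treat them as assumptions:

```python
from dataclasses import dataclass
from datetime import datetime

# Categorical values observed in the samples on this page; the real
# codebook may define additional categories (these sets are assumptions).
RESPONSIBILITY = {"company", "developer", "government", "ai_itself",
                  "distributed", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "ban", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference", "resignation"}


@dataclass
class CodingResult:
    """One coded comment, mirroring the Coding Result table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        """Reject values that fall outside the (assumed) codebook."""
        for field_name, allowed in (("responsibility", RESPONSIBILITY),
                                    ("reasoning", REASONING),
                                    ("policy", POLICY),
                                    ("emotion", EMOTION)):
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"unknown {field_name}: {value!r}")
```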
Raw LLM Response
```json
[
{"id":"ytc_Ugz_GjBusAwka99pHXZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzqrGiqk0Zcb6ONAjl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwFMndfDoKhaxW1DZl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwPtqW3E6BkDlVijqV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyhmOZsc-R7EP-1tY54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx7DsCKCsXtMfQqDqx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxKiUl45wM9Miiyqg54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyf5a9UN-E2KIh4He54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyHCcXhORwzAs_8sMR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyDKP8jeBQdDTQiijJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
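The raw response is a JSON array with one object per comment in the coded batch; the coding shown in the table above appears to correspond to the entry with responsibility "ai_itself". A minimal lookup sketch, assuming the raw response text is available as a Python string (the function and variable names are illustrative, not part of the tool):

```python
import json
from typing import Optional


def find_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Return the coded record for one comment ID from a raw batch response,
    or None if the response is not valid JSON or the ID is missing."""
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        # Raw LLM output is not guaranteed to be well-formed JSON.
        return None
    return next((r for r in records if r.get("id") == comment_id), None)


# Example (illustrative): with `raw_response` holding the JSON text above,
# find_coding(raw_response, "ytc_Ugx7DsCKCsXtMfQqDqx4AaABAg") would return
# {"id": "...", "responsibility": "ai_itself", "reasoning": "consequentialist",
#  "policy": "unclear", "emotion": "fear"}.
```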