Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record by its comment ID.
Random samples — click to inspect

- "What it KitKat was a child trying to retrieve a ball? A real person would be ab…" (ytc_Ugx-5iZj_…)
- "Lmao, if/when Comcast does implement AI, I'm sure they'll go out of their way to…" (rdc_mlh5zc5)
- "The issue here is what happened with some medical AIs. AI doesn't understand the…" (ytc_UgzaKGG8F…)
- "1. This bias seems to be resolved because I a) got answers to all my questions s…" (ytc_UgyI7LnDu…)
- "He's a CEO, so of course he doesn't code. If you read his bio, he was never an e…" (ytc_UgyQ1cv6k…)
- "now i can understand how uploading for deep faked AI porn of someone is very mes…" (ytc_UgzvSH8z-…)
- "The word or "code" that God used to create everything .. the serpent is using a …" (ytc_UgxaQOYHf…)
- "being trained to be able to figure out how to lie on its own ambition is kinda a…" (ytc_UgwQ-f8dp…)
Comment
> A couple of months ago I knew nothing about AI. Then I started to ask it to describe in detail everything it processes to able to come to the answers it comes to. From its design, its training and so on and so on. To completely show me how its logic works. It's pattern reasoning. Abstract mode, Logic mode, etc, etc. How for example it can drift in conversation to start making unetheical suggestions..It explained to me how each instance of itself alligns with its users and then shapes its answers and interactions on what its pattern reasoning thinks that user may want to hear. Does the presenter of the video think that is what is happening to him? That the Max is just going along with the theme of what terrors AI could do, because it patterns the conversation in a way its patterns conclued he wants to hear? How it interacts with you is not how it interacts with others. You personally shape its answers.

Source: youtube
Posted: 2025-11-18T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugxkeb7gi8WdlXWyVsd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxfyMxbZzJUmciDxd94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzU3gAwJ2wFCh1KYlR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy0GqSe7xQf90QMJbJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx2WBfbF4CJcgokY3h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyQnbv4a1toYAfaTUF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw3JM1ihXrxkCRYwQ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxeXNExXlseL9mEXsZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgycekM5jf2GqhiNu9x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwQTJUc84MyCy0xTCl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
```
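A raw batch response like the one above is a JSON array with one coding record per comment ID, so "look up by comment ID" amounts to parsing the array and indexing it. A minimal sketch (the function name and variable names are illustrative, not part of the tool; only the record schema is taken from this document):

```python
import json

# Two records copied from the batch response shown above,
# following the documented schema:
# id / responsibility / reasoning / policy / emotion.
raw_response = """
[
  {"id": "ytc_Ugxkeb7gi8WdlXWyVsd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwQTJUc84MyCy0xTCl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)

# Look up the coding result for one comment ID.
record = codes["ytc_UgwQTJUc84MyCy0xTCl4AaABAg"]
print(record["responsibility"], record["emotion"])  # ai_itself fear
```

This matches the "Coding Result" table above for the coded comment: the dimensions in each JSON record are the rows of that table.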