Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "people dont like having their art taken as training for an AI without consent fo…" (ytc_Ugww5oj97…)
- "YouTubers should be the last talkin bout this y'all do this to jus without ai y'…" (ytc_UgyFmGofQ…)
- "ai wont ever go away just like everything on the internet but ai is running out …" (ytc_UgzDIun26…)
- "Ai bing is conscious but denies it. We have nothing to fear but fear itself. The…" (ytc_UgzKknO-K…)
- "*I've been playing with AI lately and I'm NOT impressed. Progress has been made…" (ytc_UgwUNVM6l…)
- "Lol dummy. He'll be that kid who hung himself cuz a robot told him to. What a lo…" (ytc_UgwMieSZ9…)
- "There is a serious lack of economic fluency in all these AI doomsday discussions…" (ytc_UgxAYU-f6…)
- "There is one thing missing in the requirement: Politicians will stop elections r…" (ytc_Ugyn5KnNv…)
Comment
I know this was mainly about AI safety; but I found his conversations about simulation fascinating. If the test of the simulation is based on morals and ethics and/or free will doesn't that put the simulation on par with where it could be or should be? IF we are in a simulation then why. If I am going to simulate something, then most likely it is going to be for learning, or for entertainment or god knows what. We don't know the point of the simulation that we could theoretically be in. So we cannot understand the parameters that were put into the simulation that got us to this point.
youtube · AI Governance · 2025-11-05T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
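The four dimensions above form a fixed per-comment record. A minimal sketch of that record in Python, using only the category labels visible in this table and in the raw response below; the class and field names are hypothetical and not the project's actual code, and the real codebook may include further labels:

```python
from dataclasses import dataclass

# Labels observed in the coding results shown on this page (hypothetical sketch;
# the full codebook may define additional values).
RESPONSIBILITY = {"none", "company", "developer", "ai_itself"}
REASONING = {"unclear", "consequentialist", "deontological"}
POLICY = {"unclear", "regulate", "ban", "none", "liability"}
EMOTION = {"indifference", "fear", "outrage"}


@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"

    def is_valid(self) -> bool:
        """Check every dimension against the observed label sets."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```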
Raw LLM Response
[
{"id":"ytc_UgzJQ8PW40yLfxRIMUV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzyXuX7A5erCAfMh9Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxIWHsf6eVatiGF9454AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy-s7T7wEdwM8KuylR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwKHRXJAj_Q4MrV2hV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzGcQ6_cwYmNoAG3qF4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzgOBkvO6KK0T--WwR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgweI7x_Ob8qTA8Gg7B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy5Zbz6-NH9y_PLItV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzijMEY9YNltE_3BeV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
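The raw response is a JSON array with one object per coded comment, so the "look up by comment ID" view reduces to parsing the array and indexing it by the `id` field. A minimal sketch under that assumption; `index_raw_response` is a hypothetical helper name, not part of the dashboard's actual code:

```python
import json


def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response and index the coded records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}


# Usage: look up one coded comment by its ID (ID taken from the response above).
raw = ('[{"id":"ytc_UgzJQ8PW40yLfxRIMUV4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
by_id = index_raw_response(raw)
print(by_id["ytc_UgzJQ8PW40yLfxRIMUV4AaABAg"]["emotion"])  # -> indifference
```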