Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- The biggest problem with the rising AI efficiency inconvenience is the decrease … (ytc_UgylSvae9…)
- I ww8sh you guys would stop interrupting the AI prof and just let him finish his… (ytc_Ugwf3NmnE…)
- I think this robot(human) is amazing, but I think their is a negative effect of … (ytc_Ugy5SaaNY…)
- Ai delivers your point of a view and helps you craft it :) Sora AI comedy reels,… (ytc_UgxUnBmDF…)
- "no one has seen this guy's chat logs". You have to be joking! You think OpenAI'… (ytc_Ugy8ZQ45S…)
- The issue is it frequently generates bad inaccurate reference where a real photo… (ytr_Ugym7ZVgg…)
- my take on this is: i disagree on the framing that "it doesn't feel emotions" be… (ytc_Ugx1B5Fud…)
- Hank this guy is a total grifter being paid by the same people who fund the AI c… (ytc_UgziBJhev…)
Comment
The biggest problem is money. There is just too much incentive to forge on ahead because, if you don't, your opposition will. Also, any government safeguards will be way too far behind to be effective. Another problem may be that the AI will create a situation where they have already taken over and we don't have the mental capacity to realise it. In a way, this may already have happened.
youtube · AI Governance · 2023-05-17T08:0… · ♥ 118
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
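For readers working with the export programmatically, here is a minimal sketch of the record behind each coding-result table, modeling one row of the raw batch response rather than the display table (the "Coded at" timestamp is added later by the pipeline). The type names are illustrative, and the label sets list only the values visible in the sample response below; the actual codebook may define more categories.

```python
from typing import Literal, TypedDict

# Hypothetical types for one coded comment. Only the label values visible in
# the sample raw response below are listed; the real codebook may contain more.
Responsibility = Literal["none", "ai_itself", "developer", "company",
                         "government", "distributed", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "contractualist",
                    "virtue", "mixed", "unclear"]
Policy = Literal["none", "ban", "regulate", "liability", "industry_self", "unclear"]
Emotion = Literal["fear", "outrage", "indifference", "mixed"]

class CodedComment(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```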
Raw LLM Response
[
{"id":"ytc_UgyAQKeAnu26FoS6vm94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwd0j2ilQBxfxE2RK54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxOjcxJvNYUXZJV8mt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw2E7dKiEG3VeuIWCJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyhWKjKGpq_95E-TId4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyPlaoiq5YNXQwT7Ed4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzFlFLZ79UgiRT3XiB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxbYfVJTAqAmr1h-GR4AaABAg","responsibility":"government","reasoning":"mixed","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzKahfy8nqRX1WezYB4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxmebnUe4_8svxRMkR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
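A minimal sketch of the "Look up by comment ID" step, assuming the raw batch response is stored as JSON text like the array above. The variable and function names are illustrative, and the embedded excerpt keeps only the row matching the comment shown.

```python
import json

# Excerpt of a raw batch response: the row for the comment inspected above.
raw_llm_response = """[
  {"id": "ytc_Ugw2E7dKiEG3VeuIWCJ4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def load_codings(raw_response: str) -> dict[str, dict]:
    """Parse a raw batch response and index its rows by comment ID."""
    rows = json.loads(raw_response)
    return {row["id"]: row for row in rows}

codings = load_codings(raw_llm_response)
print(codings["ytc_Ugw2E7dKiEG3VeuIWCJ4AaABAg"]["policy"])  # regulate
```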