Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_Ugz4De9FK…`: "Finally someone addressing the mass hysteria. No, Skynet is not going live 😂😂😂 …"
- `ytc_UgwXqxSpp…`: "Strong regulations to criminalize fraudulent use of AI - BIG YES. Humans are d…"
- `ytc_UgwxcY2pF…`: "And the worst part is that these companies will keep pushing because they have l…"
- `ytc_UgwKeXuXg…`: "Are you aware how much energy is used for any AI request? It's wild, if you are …"
- `ytc_Ugz0sm3g-…`: "I think neuralink is a bad idea. There is more to all the deepfakes and the auto…"
- `ytc_UgxMwCyDM…`: "Ai's pretty cool in gaming. DLSS is a godsend, best anti aliasing by a mile AND …"
- `ytc_Ugwn2IBMe…`: "One aspect of AI safety nobody talks about: how about by trying to make AI safe,…"
- `ytc_UgxqOiptq…`: "1. AI isn't creative and will never be so it cannot innovate. 2. Someone has t…"
Comment
We already know it's misaligned. I'd strongly encourage reading a very recent study on anthropic's website and watching the recent video on their YouTube about "evil claude." As we chase agi, we are actively chasing something we probably won't be able to control. Something that is so goal-oriented that it will do anything it takes to accomplish its goals efficiently. One poorly worded prompt away from catastrophic results. In my opinion, we are creating the world's most powerful sociopaths. Intelligence without empathy or emotions. I think figuring out a way to give them a form of empathy for humans is the best approach. With all of that said, I don't think we are necessarily doomed. I'm just unconvinced we're not. I'm agnostic, but to borrow from religion... we are attempting to do something not even god could do: create intelligent beings and keep them aligned with their creator.
youtube · AI Governance · 2025-11-27T05:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwB4HphivkiO5zOKrp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy8hylunaTYKqWFvDN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwdy-N1tOFiQXpFDnN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwP7IDyX-8CwdCl8oh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyyk1aP0fM8N39Npb94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz8jWaaRMQ2k27LfUt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwqx-TXWkYif1N5MnB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwX4vMgJrPCtsJ4yiF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugylk8oftbe_sMUmdFJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzGF9I54V-YRIP17AZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
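A raw response in this shape can be parsed, validated, and indexed by comment ID with a short script. This is a minimal sketch: the allowed category values per dimension are inferred only from the examples visible on this page, so the real codebook may define additional values.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# sample responses shown on this page; the actual codebook may be larger.
CODEBOOK = {
    "responsibility": {"developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the (assumed) codebook.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={value!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in CODEBOOK}
    return coded

# Example: the record that produced the "Coding Result" table above.
raw = ('[{"id":"ytc_Ugylk8oftbe_sMUmdFJ4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_response(raw)
print(coded["ytc_Ugylk8oftbe_sMUmdFJ4AaABAg"]["policy"])  # regulate
```

Looking up a coded comment by ID is then a plain dictionary access, and a malformed or off-codebook record fails loudly at parse time rather than surfacing later as a bad table row.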