Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugza0YBJx…`: "To be absolutely clear AI art CANNOT make an exact replica. Youre either mistake…"
- `ytc_Ugw7ZiLj9…`: "Keeping humans employed is a bad reason to limit AI, for many reasons. Automatio…"
- `ytc_UgxK68imS…`: "Large corporations with multimillion dollar government contracts for the Austral…"
- `ytc_UgxJGMsNS…`: "Talent does not fucking exist, I haven't spent more than 10 years in my life, ba…"
- `ytc_Ugx8EYOQn…`: "ChatGPT isn't smart or weird - it's stringing words together in logically cohere…"
- `ytc_UgzORCjkt…`: "I don't think that self-preservation in context of language-model AI is right wa…"
- `ytc_UgzQ1YhKj…`: "I think an important point of consideration is that \"granting rights\" depends on…"
- `ytc_UgwtGJb91…`: "Sorry your wrong. AI is a tool its not an answer. It requires the correct infor…"
Comment
I'm sorry, but this video is largely nonsense, just like all of the fear mongering of AI that's being pushed by the same companies that are developing the technology. It's all about controlling the market, and controlling the information that people get from these large language models. It has nothing to do with the power of AI or the extinction of the human race. You've been duped into handing over control of this technology on the basis of fear.
Check out the "AI Unchained" podcast if you want real, accurate information about AI development from people who actually understand and work with the technology. In particular, in episode 4 with Aleks Svetski, they talk about the true state of AI development and the fear mongering being used to control the direction of AI. Episode 11 is specifically about the fears of AI, although I haven't gotten a chance to listen to it yet. Much better than getting your information about AI from a clickbait 16-minute Youtube video.
youtube · AI Governance · 2024-01-17T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
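A coded result like the one above can be represented and sanity-checked with a small record type. This is a minimal sketch: the dimension names come from the table, but the allowed value sets are inferred from the sample outputs on this page and may be incomplete.

```python
# Sketch of a coded-comment record with a per-dimension validity check.
# Allowed values are inferred from sample outputs shown on this page;
# the real codebook may define more.
from dataclasses import dataclass

ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "resignation"},
}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def invalid_fields(self):
        """Return the dimensions whose value falls outside the allowed set."""
        return [dim for dim, allowed in ALLOWED.items()
                if getattr(self, dim) not in allowed]

# Hypothetical ID, values taken from the Coding Result table above.
coded = CodedComment(
    id="ytc_example",
    responsibility="company",
    reasoning="deontological",
    policy="regulate",
    emotion="outrage",
)
print(coded.invalid_fields())  # an empty list means every dimension is valid
```

An empty `invalid_fields()` result means the record matches the observed schema; anything else flags a value the model invented outside the coding scheme.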
Raw LLM Response
```json
[
  {"id":"ytr_Ugz1KhbsYQqvecqPhVB4AaABAg.9zX9bLBi0sjAE7wCVA3rVP","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgyFcMUsZT0UP8hvr_p4AaABAg.9zWOykd9I6d9zWeMGxVeWc","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugx1w6eo6PErp5rXuip4AaABAg.9zWGu1vvBYD9zlnHZwiyQ8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgxX9CAPRU5xo4b1S8Z4AaABAg.9zW3Uv7Ccu6A03IMh4An5b","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugwmy2lzSJfalb310qx4AaABAg.9zTplyneYhR9zrqZV2PZnt","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_Ugx6ULAn7YeVS4aMauV4AaABAg.9zRQH43uN4u9zSzC6RQP1k","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgxzdmJT0yPcRiFXM5V4AaABAg.9zNClGzQiqi9zX1H8zxs-f","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxptME92VlXf8NzGop4AaABAg.9zK8VEhRRHj9z_xSfdQGSH","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxptME92VlXf8NzGop4AaABAg.9zK8VEhRRHj9zfejqMdKET","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgxptME92VlXf8NzGop4AaABAg.9zK8VEhRRHj9zria26NOi5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
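The raw response above is a JSON array of per-comment codes. A batch like it can be parsed and screened for malformed records in a few lines; the function name and required-key set below are illustrative, based only on the fields visible in the samples.

```python
import json

# Keys every record in the sample batches carries.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_llm_batch(raw: str):
    """Parse a raw LLM response (a JSON array) and separate records
    that carry all expected keys from malformed ones."""
    records = json.loads(raw)
    good, bad = [], []
    for rec in records:
        (good if REQUIRED_KEYS <= rec.keys() else bad).append(rec)
    return good, bad

# Toy batch: one complete record, one missing most dimensions.
raw = '''[
  {"id":"ytr_a","responsibility":"company","reasoning":"deontological",
   "policy":"regulate","emotion":"outrage"},
  {"id":"ytr_b","responsibility":"none"}
]'''
good, bad = parse_llm_batch(raw)
print(len(good), len(bad))  # prints: 1 1
```

Splitting out incomplete records this way makes it easy to re-prompt only the comments whose codes came back malformed.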