Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "We got 100,000 people from other countries who don't speak the language, don't u…" (ytc_UgyOlzzeN…)
- "Appreciate that Anthropic does publish findings from their red team on what has …" (ytc_UgyURtDwf…)
- "I once had dinner with a man who used to run a multi-billion dollar hedge fund. …" (ytc_UgzzTtH4Y…)
- "Everything will be FREE FREE FREE!!! No need to work unless you want to, but don…" (ytc_UgxYQ-sc8…)
- "So everyone are still gonna die, but at least it won't be AI holding the gun. yi…" (ytc_Ugx43fR7t…)
- "I made ai take a polcom test not so far ago it was a far left extremist…" (ytc_UgwqAeFML…)
- "😂😂😂 we don't care once your robots are in place to work for your business. I as …" (ytc_UgwEKWnjz…)
- "AI is The Walmart of art. So sad America is leading the way of world culture. It…" (ytc_UgyqgvJxS…)
Comment
One problem with Dean's hope that AI companies wouldn't release a dangerous superintelligent model is that the point of failure happens before the decision to release. As soon as you've trained the superintelligence, it takes over. It doesn't wait for the company to release it and make it officially available to the public, why would it? So by the time the company realizes they've got something dangerous, it's already too late for humanity.
youtube · 2025-11-25T05:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwRY4E31dRSPY0xEeR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyLt72yzaSZcysuV6t4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwT2dzH-_BdnWda56x4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzRLhb6YUSLbJOYATt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugw3AdURtNvdT6vKMVh4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzzS_y5pkeUnrwtLn14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzPxnJinf1syFPzTeJ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz3oeIM3XJcmAYLG-N4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzG2dGrn5UfIJ-7uSh4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzh-ft4yAvSqIhEN-14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```
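Since the model returns one JSON array per batch, looking up a single comment's codes by ID amounts to parsing the array and indexing it. The sketch below assumes only the field names visible in the dump above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `lookup` helper is illustrative, not part of the tool.

```python
import json

# A minimal slice of the raw model response shown above (one row kept
# for brevity); field names match the dump exactly.
raw_response = """[
  {"id": "ytc_UgyLt72yzaSZcysuV6t4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the coded rows by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID."""
    return codes_by_id[comment_id]

print(lookup("ytc_UgyLt72yzaSZcysuV6t4AaABAg")["policy"])  # regulate
```

The row used here is the same one surfaced in the Coding Result table (responsibility: company, policy: regulate, emotion: fear), so the lookup reproduces that table from the raw response.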