Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- `ytc_Ugxb5LlV-…`: "LoL you used AI to make a monstrous image of AI while talking about the risk of …"
- `ytc_UgwH8mLj4…`: "Tesla's FSD does not have a better accident rate than human beings. Waymo, sure,…"
- `ytr_Ugw9i9K2c…`: "I posed the question earlier, who's to say the Ai won't turn on/compete with ea…"
- `ytc_UgybQfb9E…`: "There's a place for all kinds of tech, and I have been strongly encouraging my t…"
- `ytr_UgwcH82ez…`: "I feel uncomfortable when people call images created by AI “art.” Look up the de…"
- `ytc_UgxOEXtmZ…`: "Every single person has so much spiritual knowledge inside them, but; THEY ARE …"
- `ytc_Ugy-ql3fW…`: "I don’t know what the problem is, Gemini probably just took training footage fro…"
- `ytc_UgxbTFeIu…`: "As long as we don't have Autonomous Cars, we won't have autonomous weapons eithe…"
Comment
The short answer is that AGI isn't dangerous -- that's just marketing speak to get people like Trump to hand you billions of dollars, and it worked. These companies are jockeying to become "the good stewards of dangerous tech" and, of course, "bad things will happen if China gets it first". It's just marketing. The real problem with AGI is that it is ridiculously resource intensive to develop, and it serves no purpose other than to TRY to prove that humanity can be duplicated by a machine. Specialized and focused AI models are extremely useful -- but a model that can do everything a human can is counterproductive.
Source: youtube
Posted: 2026-04-20T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_Ugzdj0UsbwbcZ8YPDuh4AaABAg.AVxWgMUN6rnAW1P3ucO2Tk","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgwQpoVWqDlLGdJaM014AaABAg.AVw_taxJ2peAW1hBgThcvT","responsibility":"company","reasoning":"contractualist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugy3_Ts_AomkeVKwmFV4AaABAg.AVvdJP0XkHxAW1QmP1DxDE","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugx3JN9CMSwtkQHF5Bp4AaABAg.AVvcWW2pbnCAW1QgxLsY2i","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzEbNrgZRZcdcoKH0d4AaABAg.AVso_MjTwnAAVspB-SiKpe","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgyuwkF575Em8bpYUlN4AaABAg.AVsBOLfT41aAW1SoEvKD-v","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyuwkF575Em8bpYUlN4AaABAg.AVsBOLfT41aAW3Cd40D__j","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_Ugx_0r9ymZ0O6rojw9p4AaABAg.AVroRw9u9MsAW1TXzshpWG","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgykJUqFUeKf9QjEM914AaABAg.AVmnicY9kovAVotcsM_Ybt","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_Ugw4MUa6Xbq0Imr9Xqh4AaABAg.AVmYXFGohFEAVousxmkp9l","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
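The batch response above pairs each comment ID with one value per coding dimension. A minimal sketch of how such a response could be parsed and validated, assuming the allowed values per dimension are exactly those visible on this page (the actual codebook may define more):

```python
import json

# Allowed values per coding dimension, inferred from the samples shown
# on this page -- an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"outrage", "indifference", "resignation", "mixed",
                "fear", "approval"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response and validate each coded item.

    Raises ValueError if any dimension carries a value outside ALLOWED,
    so malformed model output fails loudly instead of entering the data.
    """
    items = json.loads(raw)
    for item in items:
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{item.get('id')}: bad {dim}={item.get(dim)!r}")
    return items
```

For example, feeding the first line of the response above to `parse_batch` returns a one-element list whose `responsibility` field is `"government"`; an item coded with an unknown value raises `ValueError` instead of being silently stored.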