Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
As overhanded as this is going to sound, there simply isn't any other way to put it: those who actually build and train these AI models in pytorch/tensorflow etc. KNOW they are impressive and practically useful but nowhere near sophisticated or intelligent currently to be worried about a super AI takeover. There are other risks (not existential) like algorithmic bias, military/state power use etc., but it is only the lay or scientists in adjacent fields who make science fiction claims about things they have no practical experience in.
Platform: youtube · Topic: AI Governance · Posted: 2023-08-20T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxT0jzYgY0XdOQ4cqh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgyEkCQtq92SLKPlPNl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwLVGaFFl8nCHEepqh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx52BnGLYa6UxbMX294AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxAci_nguooo5v0NRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyILhQ_KsZ-b-C-Lqx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzDi4kiS-bSe3g-LhN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx0PFkivatSns4E8xd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxedCS7pDsuymN4QxF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzprZVcmX1iB91yZPp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
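The raw response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a batch might be parsed and indexed by comment ID for lookup (the field names and the two sample records are taken from the response above; `index_codes` is a hypothetical helper, not part of any tool shown here):

```python
import json

# Raw model output: a JSON array of per-comment codes, abridged to two
# records from the response shown above.
raw_response = """
[
  {"id": "ytc_Ugx52BnGLYa6UxbMX294AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzDi4kiS-bSe3g-LhN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

# The four coding dimensions every record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array and index records by comment ID,
    skipping any record that is missing a coding dimension."""
    records = json.loads(raw)
    return {
        r["id"]: {dim: r[dim] for dim in DIMENSIONS}
        for r in records
        if all(dim in r for dim in DIMENSIONS)
    }

codes = index_codes(raw_response)
print(codes["ytc_Ugx52BnGLYa6UxbMX294AaABAg"])
# → {'responsibility': 'developer', 'reasoning': 'consequentialist',
#    'policy': 'industry_self', 'emotion': 'approval'}
```

Indexing by ID is what makes the "inspect the exact model output for any coded comment" lookup cheap: one parse of the batch, then O(1) retrieval per comment.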