Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or inspect one of the random samples below:

- I mean i fully support this but it doesn’t really stop companies from just takin… (ytc_Ugx2qDaDe…)
- Finally someone speaking the truth about AI. This is the worst thing people have… (ytc_Ugy93KaMu…)
- Automation is not a problem, it’s the fact that it didn’t and doesn’t come about… (ytc_Ugw07mhYG…)
- dude want me to show you my crappy old drawings, the only ablity artist have ove… (ytc_UgziYLE4B…)
- To be completely fair though that guys just an idiot LOL / But yeah AI will still… (rdc_oadnt27)
- The top dog of A.I says... / "We dont understamd how they work" / This is madness… (ytc_UgxiQSPuz…)
- AI should only exist to end capitalism and poverty simultaneously. The point sho… (ytc_Ugw6kr4Qb…)
- Yeah I get that... Ai can HAS to be used with human interaction to accomplish t… (ytr_UgxpJ446H…)
Comment
I'm appalled by Ezra's denial of the threat and wishful thinking. It will only take one mistake or miscalculation for a catastrophe.
Furthermore, AI is not "programmed" like traditional software. It is being taught how to learn on its own, from human-derived information and data. All humans do is feed more information letting it know if what it derived was "good" or "incorrect". These aren't commands and it is not like giving a dog a treat.
What Eliezer is pointing out is that the result of the learning process is wildly different (and in some cases threatening) from what we hoped for. This is not an industrial machine or tool. We are creating is a digital life form.
youtube · AI Governance · 2025-10-21T02:1…
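
The record above pairs the comment text with its source platform, the video or topic it belongs to, and a (truncated) publication timestamp. A minimal sketch of one such record, assuming hypothetical field names (the tool's actual schema is not shown):

```python
from dataclasses import dataclass

@dataclass
class CommentRecord:
    """One comment as displayed above; all field names are assumptions."""
    id: str         # platform-prefixed ID, e.g. "ytc_…" or "rdc_…" in the samples
    text: str       # full comment body
    source: str     # platform, e.g. "youtube"
    video: str      # video or topic label, e.g. "AI Governance"
    published: str  # ISO-8601 timestamp
```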
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
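
Each of the four dimensions takes one value from a closed codebook. A small validation sketch; the allowed sets below are inferred only from the coded rows visible on this page, so the real codebook may define more categories:

```python
# Category sets inferred from the outputs shown here (assumption: the
# actual codebook may contain additional values).
CODEBOOK = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning":      {"consequentialist", "deontological", "unclear"},
    "policy":         {"regulate", "liability", "none", "unclear"},
    "emotion":        {"outrage", "fear", "approval", "indifference", "mixed", "unclear"},
}

def validate(row: dict) -> list[str]:
    """Return the codebook violations in one coded row; empty means valid."""
    return [
        f"{dim}={row.get(dim)!r} not in codebook"
        for dim, allowed in CODEBOOK.items()
        if row.get(dim) not in allowed
    ]
```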
Raw LLM Response
```json
[
  {"id":"ytc_UgyMkp_eHRRL0dDh44p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwMyh32CmdkyYd4T9h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzyMjkxAfxT6e0NDpF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy8PDrcKhxHPFH7wHp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzFIM1AWbBki8LQo2Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxL5Ytx-_8kID63QbV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyKupnQIyCgQHUPUf94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwLUsO4qF1V423BTcx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwlvxYcaKeuSqtTALl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwREmL4EJrbZp6hs314AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
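
The model answers each batch with one JSON array, so looking up a single comment means parsing the stored raw response and scanning it for the matching id. A sketch, assuming the raw text is available as a Python string:

```python
import json

def lookup(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw batch response and return the coded row for one comment ID."""
    for row in json.loads(raw_response):
        if row.get("id") == comment_id:
            return row
    return None
```

Applied to the response above, lookup(raw, "ytc_UgwLUsO4qF1V423BTcx4AaABAg") returns the row whose values (developer, consequentialist, liability, outrage) populate the Coding Result table.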