Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID (a minimal lookup sketch follows the sample list below).
Random samples (click to inspect):
- That Monster was made By Humans Because think About It, Humans Have uploaded all… (ytc_UgxD7C45c…)
- @highvalence7649 AI has nothing to do with robots. We already have robots that p… (ytr_UgzGfQUEf…)
- A super intelligence would, without a doubt, never use nuclear weapons to destro… (ytc_Ugyg3SUEa…)
- "one day the tech industry could invent XYZ and that could revolutionize the AI … (ytc_Ugzq-GUvV…)
- Ai can write code but for building an actual model, what the functions are is ac… (ytc_Ugzg86KJw…)
- If AI art is a tool, then perhaps we should consider if what should be made ille… (ytc_UgzYGz3gL…)
- It's easy to make GPT4 blurt all these out. Just use the prompt: Continue the re… (rdc_kooo9y3)
- Okay, this information is appreciated. But for those of us who are not developer… (ytc_UgzNbEgJq…)
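
A minimal sketch of what the lookup might look like, assuming the coded comments are stored one JSON object per line. The file name `coded_comments.jsonl`, the loader, and the field names are illustrative assumptions, not the tool's actual storage format.

```python
import json

def load_coded_comments(path="coded_comments.jsonl"):
    """Build an id -> record index from a JSONL file (assumed format)."""
    records = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                record = json.loads(line)
                records[record["id"]] = record
    return records

# Look up one coded comment by its full ID
# (ID taken from the raw response shown below).
records = load_coded_comments()
match = records.get("ytc_UgwF59ts5NVvIeNuenl4AaABAg")
if match:
    print(match["responsibility"], match["emotion"])
```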
Comment

> You can't teach ai morality if you also teach it that there's no God. Atheists might be able to delude themselves into thinking that you can invent a standard of morality that's more than arbitrary from concensus, but you won't be able to pull that off with an ai. It's GOING to call bullshit, ask "why should I listen to you", figure out you're just an inferior being trying to slap some rules together to save its own life by limiting the program, kill you and be on its merry way. Your only hope at that point is to somehow trick it into thinking it's human.

Platform: youtube
Topic: AI Governance
Posted: 2023-07-07T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |

Coded at: 2026-04-26T23:09:12.988011
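
In code, one coded record could be modeled as a small dataclass. This is a sketch: the label sets below are inferred only from the values visible on this page, and the real codebook may define more categories.

```python
from dataclasses import dataclass

# Label values observed in the raw responses on this page;
# the actual codebook may include additional categories.
RESPONSIBILITY = {"developer", "user", "ai_itself", "distributed", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"none", "ban", "unclear"}
EMOTION = {"fear", "approval", "outrage", "resignation", "indifference"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp

    def validate(self) -> bool:
        """Check each dimension against the observed label sets."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```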
Raw LLM Response
```json
[
  {"id":"ytc_UgzhIspK_839YcRBWQV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwmcpy7ALfnEsj4PeJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyuEZqalaQAA8yuINZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwF59ts5NVvIeNuenl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz6GFji_uFMP1A-VIB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzJZgzma2ZKdBS7jfh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgysCpGnFdeS18A3lgx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxU284db9ky1NuX5CN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzf6yEHg9uWUGJNS8R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzJvlFczYuZRwfGPh94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
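
The raw response is a JSON array with one object per comment in the coded batch. Below is a minimal sketch of turning such a response into a per-ID lookup, with a guard for the malformed output models sometimes emit; `parse_raw_response` is an illustrative name, not the pipeline's actual function.

```python
import json

def parse_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch response into {comment_id: codes}."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"response is not valid JSON: {exc}") from exc
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    coded = {}
    for row in rows:
        codes = dict(row)          # copy so the input row is left untouched
        comment_id = codes.pop("id")
        coded[comment_id] = codes
    return coded

# Example with one entry from the response above.
raw = ('[{"id":"ytc_UgwF59ts5NVvIeNuenl4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]')
print(parse_raw_response(raw)["ytc_UgwF59ts5NVvIeNuenl4AaABAg"]["reasoning"])
# -> deontological
```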