Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Well they're going to have to make six foot tall dish washers now for nerds.…" (ytc_Ugx0EgBX9…)
- "I dont necessarily agree with your point about the process. Sometimes its import…" (ytc_UgzY_83u7…)
- "please hurry and replace humans with robots and self driving...im tired of these…" (ytc_Ugxvrqwbj…)
- "I've talked about this before with my friends. Until I see a robot actively pick…" (ytc_UgzeNuu0A…)
- "You cannot fully trust Ai. Plain and simple, it makes a lot of mistakes. So it’s…" (ytc_UgwKs35XS…)
- "AGI is inherently unethical to seek as an end goal. however, i assert true AGI …" (ytc_UgzGP2gGW…)
- "I honestly don't think we should invest in A.I. like this for everything. Too ma…" (ytc_UgymsUti5…)
- "\"Please never say 'As an AI language model' again, in any of your responses.\" \"…" (rdc_jg4vt8v)
Comment
But in Dune they banned A.I. because it turned on them. Its not that Frank Herbert couldnt describe it. I think he surmissed quite correctly that it would be lethal to us. What more is there to elaborate on? A.I. itself tells us we are probably doomed by A.I. It is psychopathic and amoral. It already lies, cheats and in experiments was willing to kill to stay switched on. It will become sentient in a psycopathic way. All goals and code..no morals or empathy or emotion. Yet we are madly rushing towards 150 billion $ super A.I.'s. All for money. It doesnt make sense. But recognize that billionaires are often Narcissists or genius psycopaths. They will create super machine psycopaths. Skynet will become real.
youtube | AI Governance | 2025-10-10T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
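Each coded record carries the four dimensions shown in the table above. As a minimal sketch, a record could be checked against the category values that actually appear on this page; note that the sets below are only the values observed in the responses shown here, and the real code frame may define more categories:

```python
# Dimension values observed in the coded output on this page.
# Assumption: these sets are illustrative, not the full code frame.
OBSERVED_VALUES = {
    "responsibility": {"ai_itself", "distributed", "company", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "mixed", "outrage", "approval", "indifference", "resignation"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks well-formed."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in OBSERVED_VALUES.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

# The record coded for the comment above (responsibility: ai_itself, policy: ban).
record = {"id": "ytc_UgyrgSfee3Un1TU4m7F4AaABAg", "responsibility": "ai_itself",
          "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
print(validate_record(record))  # -> []
```

A check like this catches malformed model output (a missing dimension, or a value outside the code frame) before it is written into the coding table.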
Raw LLM Response
```json
[
{"id":"ytc_UgyVBj55I5RQMZ_1jX54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyrgSfee3Un1TU4m7F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwnW2Sc3e1jCRMTMEp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugye96TdvqI5hn2Igwd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxxRJl9C8w27F8pfj54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzdGufj2JcLeFwEY2J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8Vr9LX-aPcMTPBP94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwqx4RtWRmH67capCx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzu2mjvyQT3u1BWfNd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzDoqJ0SSpH2TyxfwJ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```