Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Someday, we'll have more and more people realizing Money is just 'one way' for h…" (ytc_UgxjnYBe-…)
- "The scary part of all of these is that those people who control the world, who a…" (ytc_Ugx886xHC…)
- "@mayalangelet6827 What would prevent the programmers from programming the AI to …" (ytr_UgzCXPsVF…)
- "I’m pissed, because I USE THOSE WORDS IN MY DAY TO DAY I LIKE FANCY WORDS But n…" (ytc_Ugzdy5Y0L…)
- "Isn't it a bit of \"a monster from the id\" (classic Sci-Fi \"Forbidden Planet\")? I…" (ytc_UgyuiQUgr…)
- "I would like to see the contract between Open AI and Microsoft and just what Mic…" (ytc_UgxzFpS_y…)
- "Management HAS to be wrong. There’s no way 1/3 managers will be AI. Ai kinda def…" (ytr_Ugy12BOUx…)
- "Imagine a solar x flare just taking out all electronics and grids. There goes al…" (ytc_UgxpKo0h9…)
Comment
I think humanity's greatest existential risk will be the time between the development of artificial general intelligence, that's still controlled by humans, and the development of sophisticated autonomous weapons up until the birth of artificial superintelligence.
If we are able to survive to that point, I think the superintelligence would likely logically choose to help us survive.
I really don't see a reason as to why it wouldn't.
youtube · AI Governance · 2025-06-17T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxqNc2i5-uKafsJ9-N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxtN-IjtBBpVv3Ugdl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwYbRWOFmbNh4QNTRB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwTpecjewLSL1AAKGF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzQJu5vk3tslmruuxd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzpi_qpkAmDy58YyUR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzCMSUvHIy_DloGWQ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyoeERaLSu-2gEwQjd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzAwVb-RmeMQLobX254AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyWM39ZgCVQSIPG2Sh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
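A raw response like the one above can be turned into the per-comment lookup the tool exposes. The sketch below is a minimal, hypothetical parser: the dimension names come from the coding table, and the allowed values are only those observed in this batch (the actual codebook may define more).

```python
import json

# Values observed in this batch; the full codebook may define additional ones.
OBSERVED = {
    "responsibility": {"ai_itself", "none", "company", "distributed", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "outrage", "fear", "indifference", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, checking each dimension against OBSERVED."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in OBSERVED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in OBSERVED}
    return coded
```

Looking up a comment ID then reduces to a dict access, e.g. `parse_batch(raw)["ytc_…"]["emotion"]`.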