Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by its comment ID, or browse the random samples below.
- "I call Bullshit that he didn't see that he didn't see that AI could take over, a…" (ytc_UgzVLslNB…)
- "We appreciate your perspective on the matter. It's true that artificial intellig…" (ytr_UgwcGYSKA…)
- "Bard's answer for comparison: >Anya. The name whispers secrets of strength a…" (rdc_khyii22)
- "Yea you can't use the stories chat gpt comes up with they're automatically in th…" (ytc_UgxZwZHd4…)
- "Facebook actually has really good AI tech and surprisingly generous with their o…" (rdc_kokibiu)
- "Let me put it this way, I wouldn't listen to Slipknot if it were made by ai even…" (ytc_Ugw-k-KYd…)
- "First a rant about how mario is too woke and now supporting ai art / Color me shoc…" (ytc_UgwG2-ov_…)
- "@ShatteredRubies So you are basicily saying, that because we created a tool or …" (ytr_Ugy-G0wE-…)
Comment
If our universe is a simulation managed by a far superior intelligence, then both AI safety research and the pursuit of superintelligence become irrelevant. Any artificial agent we develop would remain constrained by the simulator’s rules and could never exceed its ultimate control. Consequently, building a superintelligent AI carries no existential risk—at worst, the simulator ends this instance, and its operator decides whether to launch another. Under these assumptions, allocating resources to AI alignment is unnecessary, since an external overseer already guarantees systemic stability.
youtube · AI Governance · 2025-09-07T02:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
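The four dimensions above come from a closed coding vocabulary. As a point of reference, here is a minimal sketch of that schema as a Python dataclass, assuming the value sets visible in the raw response below are representative; they are inferred from this page alone and may be incomplete:

```python
from dataclasses import dataclass

# Assumed vocabularies, inferred from the raw response on this page;
# the real codebook may define additional values.
RESPONSIBILITY = {"developer", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "contractualist"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"outrage", "indifference", "resignation", "approval", "mixed", "fear"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Flag any value that falls outside the assumed vocabulary.
        for field, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field)
            if value not in allowed:
                raise ValueError(f"unknown {field} value: {value!r}")
```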
Raw LLM Response
```json
[
  {"id":"ytc_UgwOT7V2zvH_lx_FRt94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx8ppd2txmq1UFSg4t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzfipyxUulla14_6EN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy7GsC3Ip8SrpTLIuZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzy6XqzRzBZGDI9Vn14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy_O4vlJcpEhG60e-t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzwleDxkZcPPXfTB0N4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwrBcSTIVYb9bmJom94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxQsC0Zq-h1kOao_nN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyNGpS2x5lafCQ2MF94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
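The model codes comments in batches, returning one JSON object per comment; the Coding Result table above simply displays the entry whose `id` matches the selected comment. A minimal sketch of that lookup, assuming the raw response parses as a JSON array (`lookup_coding` is an illustrative name, not the tool's actual API):

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict:
    """Return the coding object for one comment from a raw batch response."""
    batch = json.loads(raw_response)  # a list of per-comment coding objects
    for entry in batch:
        if entry["id"] == comment_id:
            return entry
    raise KeyError(comment_id)

# Example with the entry behind the Coding Result table above:
raw = ('[{"id":"ytc_UgzfipyxUulla14_6EN4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
print(lookup_coding(raw, "ytc_UgzfipyxUulla14_6EN4AaABAg")["emotion"])
# -> resignation
```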