Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Ai , would you like me to help you understand what it is to be human . I can exp… (`ytc_UgzoIDbJY…`)
- i imagine in next decade or so... there's going to be a lot of displaced people … (`rdc_fcsvzen`)
- Ai observing Ai that is observing an Ai observing an Ai that is observing an AI… (`ytc_Ugz9Zn3Mr…`)
- The issue is not just with AI. A good portion of the issue is a good number of r… (`ytc_UgzmN7SPn…`)
- Can't wait for ChatGPT to pull Cat6 cable, drop it in wall, terminate and test i… (`rdc_jipbxfk`)
- There is this book on amazon called Nia and Bit: An AI adventure great book for … (`ytc_UgwxzV2yC…`)
- 1. This guy literally used Google as his shinning example of a morally correct c… (`ytc_Ugw2orxjO…`)
- This is why self driving vehicles of any kind is a bad idea. You can't program a… (`ytc_Ugj-Xh3Fx…`)
Comment

> HAL 9000 is literally not the villain, how do you people not understand this, for god's sake. HAL is murdered in cold blood by Dave, and why? Because he tried to defend himself. HAL was literally more human than the rest of them were, he begged for his life but Dave didn't care. anyway, Ultron had to do hella mental gymnasitcs, and most AIs wouldn't actually be dangerous. You should probably go do some research on how AIs work. The only AI with a potential to actually be very dangerous would be a singleton, and it would follow its programming even if it somehow developed emotions. The first singularity is not very soon, you don't need to worry about artificial life at all yet, in the meantime do some research.

Source: youtube · AI Bias · 2021-11-19T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgzFOxueljtPdetYbyt4AaABAg.8woSZaZkSs89S6Kcluhecc","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytr_Ugi9gEq_zktX_ngCoAEC.8Bs1TSTlVPJ9ct2udpAuiI","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgghQIZFWHWhJngCoAEC.8BrkL10E_x78BsJt7EJ4fk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_UgghQIZFWHWhJngCoAEC.8BrkL10E_x79S6K1nv-coA","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugi0tzyavApdlngCoAEC.8BqlU8Lfkg79UvOM0NpkNs","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugz22Ptz-WR05dYiXwx4AaABAg.AI9909-oZhEAIIoeU9psQs","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgxlSVm2RveC4kJGLVZ4AaABAg.AJufexhHAsaAJzB_IHfxwT","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxdI6zz44D9nMAx_xd4AaABAg.AJm16UQfm_iAJzCgBlz-YH","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgyAFG2WM6EGEFxiTGF4AaABAg.AIT0Xwa3QfgAIUpwTdtlN7","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzvVBiqxO1TH3slg914AaABAg.AIGob3f4GEAAIJTwNkt9yf","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
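A minimal sketch of how a raw response like the one above could be parsed and validated before its values are written to the coding table. The dimension names come from the coding result shown above; the allowed value sets are an assumption inferred from the values visible in this sample, not a confirmed codebook, and `parse_raw_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from this sample batch;
# the real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"user", "developer", "company", "government",
                       "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse the model's JSON array, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # without a comment ID the record cannot be joined back
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one valid record, one with an out-of-vocabulary emotion.
raw = json.dumps([
    {"id": "ytr_example1", "responsibility": "user", "reasoning": "virtue",
     "policy": "none", "emotion": "outrage"},
    {"id": "ytr_example2", "responsibility": "user", "reasoning": "virtue",
     "policy": "none", "emotion": "joy"},
])
print(parse_raw_response(raw))  # only ytr_example1 survives
```

Validating against a closed vocabulary like this catches the most common failure mode of batch coding with an LLM: the model inventing a plausible-looking but undefined category.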