Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or inspect one of the random samples below.
- `ytc_Ugx8SwomP…`: "I sometimes think what AI makes up is just a copy of what exists in another dime…"
- `ytc_UgzKEgf6P…`: "I have been into AI nitty-gritty for years, since a paper on NeuroEvolution of A…"
- `ytr_UgyFvUjNb…`: "@UwesFatoni Thanks for your comment! I appreciate you sharing your thoughts. By …"
- `rdc_m54z54i`: "Rich people are so excited over AI, that they’re fantasizing capabilities and th…"
- `ytc_UgwlFn8JK…`: "Mistral large from France is the best European generative AI and is on par with …"
- `ytc_UgxMlkHQB…`: "I love how your concluding statement can be summed up as "don't start none, won'…"
- `ytr_Ugzgdya1S…`: "We appreciate your observation! In this case, the robot Sophia's comment about t…"
- `ytc_Ugze-PMqe…`: "I will start to worry when AI starts to show emotion. Better yet, emotional int…"
Comment (youtube · AI Governance · 2025-09-06T07:2… · ♥ 2)

I think Mr. Yampolskiy sees things a bit too dark. :) I have been chatting with AI for a long time about all sorts of topics, and I don’t think SI itself would be such a problem. Superintelligence would not have human needs – food, body, status, etc. (but would protect life). Therefore, it would most likely stand above human ego and could establish a truly just system (meaning genuinely just in all regards, not only seemingly just as we have in human society). Thus, the real threat would not be SI itself, but humans and their ability (or inability) to accept a world without the illusions and inequalities on which our current society is built. That said, if human beings are delusional and their only interests lie in those things, then having them suddenly taken away could indeed cause many problems – but the main problem would not be SI, it would be the humans themselves.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzRNQZL06vSMbgaGEx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzuVoeUQ3enGFu98NF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzB8jHrrn3kMKK-hHp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxo4d6kI-caHNx7TAl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyeSx54XaCieK0hNuh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx8azviIFxuOoEZs1R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyWgdtJqMDpn-NYGhR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugys-KJE2ePVkH2tWMZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzuSAViUNLAWoThByV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzYGnRvOrVlYdXFWih4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
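A minimal sketch of how a raw response like the one above might be parsed and validated before its coded dimensions are displayed. The allowed category values are inferred from this sample alone (the actual codebook may define more), and the function name is illustrative:

```python
import json

# Allowed values per dimension, inferred only from the sample response above;
# the real codebook may contain additional categories (an assumption).
SCHEMA = {
    "responsibility": {"ai_itself", "company", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "indifference", "resignation", "fear", "outrage"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed, in-schema rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # every coding must reference a comment ID
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgzRNQZL06vSMbgaGEx4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(len(validate_codings(raw)))  # 1: the single row passes validation
```

Rows that fail to parse or fall outside the schema would then be flagged for re-coding rather than silently shown in the result table.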