Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by its comment ID, or start from one of the random samples below.
- "AI won’t stop destroying jobs if basic securities that should have been present …" (ytc_Ugyh93xW4…)
- "I really hate 2 version of A.I using 1) people who make video with A.I 2)A F***I…" (ytc_UgxUlXXH9…)
- "I can tell it's AI cause of the eyebrows. It's just too crisp and clear. Even th…" (ytc_Ugy8D40Ko…)
- "As opposed to continuing to let humans drive drunk? Auto manufacturers will not …" (rdc_dmpcz3s)
- "That’s the scariest thing about that game for me. Once you remove the general sc…" (rdc_gs63shw)
- "Unless the technology gets an upper hand, and make human led revolutions outrigh…" (rdc_kigbr9s)
- "Google Engineers haven't talked to a woman in so long they think a chatbot is se…" (ytc_UgxC9sAaE…)
- "Finally! After all this time, I find someone who shares my opinion that AI art i…" (ytr_Ugwc4pYeu…)
Comment
About the alignment problem and "we have to be lucky always, they have to be lucky once":
I would like to argue that we only have to be lucky once too.
Let's say the first AGI has a 10% chance of being evil. If it is not, it will surely help us build the second AGI, and because it is smarter than us, that one will have only a 5% chance of being evil. The third carries a 2.5% risk, the fourth 1.25%, and so on.
At no point is the total risk over 20%, so we too only have to be lucky once, because smarter AI keeps reducing the risk.
We would also probably start building programmer AIs before AGI. They are simpler and have simpler goals ("make a program that does what the prompt asked"), so it is much harder to make them accidentally evil than AGI.
youtube · AI Moral Status · 2023-08-20T20:1…
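Read as a union bound over the whole chain (an assumption about how the commenter intends the per-generation figures to combine), the 20% ceiling in this comment is just the sum of a geometric series:

$$
0.1 + 0.05 + 0.025 + \dots = \sum_{k=0}^{\infty} 0.1\left(\tfrac{1}{2}\right)^{k} = \frac{0.1}{1 - \tfrac{1}{2}} = 0.2 .
$$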
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgyT5i_a58y4WRBHedB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2mynRM8sQPVNKdx14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzL8z-I5awF5dPscOZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxuMan507WxuZbwLTx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxPh7cdY6K4dSQP5rl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgysVPV51E5cOgYl-6Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy9WOIdoXxKHBUDDA54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwoDfoYQQxPgHJ2hhh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGLivzxlCJPHiTyTZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxXVu0CJ0sxTCETJGN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"}]