Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Just think. This guy has invested in every profitable venture for the past 50 ye… (ytc_UgzPE26lC…)
- At what point exactly does it stop being just regression analysis and start bein… (ytc_UgwdxvIcf…)
- Hey there! It seems like you might be referring to a popular science fiction ser… (ytr_UgwlV-BLr…)
- "Everyone is chasing AI tools — but who’s talking about the price we’ll pay late… (ytc_UgxmvKxIB…)
- I’m a disabled artist. My disabilities hardly even affect my art. My hands may h… (ytc_UgxwD2mBK…)
- Is asked(for a friend): How do I increase my stamina in bed ChatGPT: To increas… (ytc_UgzeWX206…)
- UK people are already speaking openly about it. While here in Europe, there are … (ytc_UgyZmoyDm…)
- I did the cartoon network scripts on chatgpt, and they were so distressful and s… (ytc_UgxZlNmr9…)
Comment
The thing is about pain. There is no reason for us, or for the AI itself, to program it to FEEL pain; it makes sense to make it sense pain but not suffer from it. The idea that AI will think something like "Kill all humans" is kind of exaggerated. It's not that you have to limit a real, functional AI, but you need to make it able to learn and improve itself, to gain "experience" on its own within one role, like an auto-mechanic, without letting it get into a dilemma like "Is it worth fixing cars over and over, or should I create the perfect car?" Give it, say, half a conscience, so the AI wouldn't think "outside the box" and would just carry on with its linear duty.
youtube · AI Moral Status · 2017-02-25T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UggjVwD_ZOZMU3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj3D5WE37wArXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgjFrM_C2E3oy3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgiDVdRVVHdhxngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgjniVw29kIWbXgCoAEC","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghkYCr9dpeSNngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugi4GssI78QMv3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugibwt-ypmpmDHgCoAEC","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgiPeApqlkTUO3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UghvPk48U9EYcHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
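The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a batch might be parsed and validated before use (the allowed value sets below are inferred only from the codes visible on this page, and are an assumption rather than the tool's actual codebook; `parse_batch` is a hypothetical helper):

```python
import json

# Allowed values per coding dimension, inferred from codes visible on this
# page (assumption: the real codebook may contain additional categories).
CODEBOOK = {
    "responsibility": {"developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"indifference", "fear", "approval", "outrage"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response into {comment_id: codes}.

    Rows with a missing dimension or a value outside CODEBOOK are dropped,
    so a malformed model output never enters the coded dataset.
    """
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row.get(dim) for dim in CODEBOOK}
        if all(codes[dim] in allowed for dim, allowed in CODEBOOK.items()):
            coded[row["id"]] = codes
    return coded

raw = (
    '[{"id":"ytc_example","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)
print(parse_batch(raw))
```

Keying the result by comment ID mirrors the "Look up by comment ID" view above: each coded row can be retrieved directly from the batch without rescanning the raw response.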