Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- i mean will they though? We hear every day of some new major protest for climate… (rdc_f1w2kl5)
- @brrfibgtun LLM chatbots are merely statistical language models, useful for lang… (ytr_UgxyUcVoS…)
- 1. The butter robot at The begining from rick and morty 2. 1:14 Bmo Reference 2.… (ytc_Ugx7Lk0ES…)
- A.I. her eyebrows are not the same colour as her hair and many blonde people don… (ytc_UgwYPqGnc…)
- As someone who loves art, this is genuinely so infuriating to see people using a… (ytc_UgyxTLDkB…)
- We are creating a new species that will not only totally control us, it may no l… (ytc_UgyoTRfX2…)
- Crypto ai bros are corporate wannabes who rationalize ai art by calling it "just… (ytc_Ugz4bJRnf…)
- All of these happened on previous versions of Autopilot. Tesla has since redesig… (ytc_UgyHh28Ve…)
Comment
I really really still concern this.
The developer should not put emotional statement come from the AI. since only human has emotion in this conversation.
Instead saying "Sorry" (which means, I regret), It has to be "I applogize" (Which means: Oh I just realized I did wrong. Can you give me your forgiveness?).
Your confusion, is actually the language barrier. AI language barrier to explain to you, and your language barrier to really understand what AI means from AI perspective.
Language problem, no philosophical thing involved.
youtube | AI Moral Status | 2025-03-10T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
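Each row of this table is one dimension of a single coded comment. A minimal sketch of how such a record could be represented in Python, assuming the field names shown here; the class and field names are illustrative, not the tool's actual schema, and the example values are taken from the raw response below.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above.

    Field names are assumptions for illustration, not the tool's schema.
    """
    comment_id: str      # e.g. "ytc_UgyZ8X13BHsy0bhoy8Z4AaABAg"
    responsibility: str  # e.g. "developer", "company", "user", "ai_itself", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "virtue", "unclear"
    policy: str          # e.g. "regulate", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "approval", "resignation", "indifference"
    coded_at: str        # ISO 8601 timestamp of when the coding was produced
```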
Raw LLM Response
[{"id":"ytc_UgxZ_ueLYOUaaSLnLDd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyFYW1sLUtdLvjCOjl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugydd-iw7tXAP-kdt8F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},{"id":"ytc_Ugx4B0MW9ZHbf7cWLll4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgxRktD0CueUhw3WhMB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgwQOcMKeWV0bIu36Q14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},{"id":"ytc_Ugws1ehdaLN1lhygl1R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},{"id":"ytc_UgzcNmjutdXkOOo8cm94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},{"id":"ytc_Ugy4YNwId_GlRbYKPV54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyZ8X13BHsy0bhoy8Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}]