Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "BC you can add 5 million senders all you want but if your software/ai is bad it …" — `ytr_UgypDcMxF…`
- "thats true, people know that AI will destroy humanity yet i dont understand why …" — `ytr_UgyfG1LjQ…`
- "People are taking AI serious. The question should have been, when are people goi…" — `ytc_UgyYHUxH8…`
- "This reminds me of when I saw that a certain comment said that digital artist can…" — `ytc_Ugz7N5Tum…`
- "AI can be used however you want. You can generate images and whatever. As long a…" — `ytr_UgzQhLhju…`
- "Interesting observation that AI can't provide human experience. But that said, i…" — `ytc_UgyXdQ5gu…`
- "This chatGPT discussion was also done the other way around... and suddenly chatG…" — `ytc_UgyuETEKQ…`
- "„Wouldn't it be great if we could get that work done for us, and we could take t…" — `ytr_Ugw62KHKO…`
Comment
@A1Authority So? If you're on the better weed then you tell me. How is it bad to ask for the consent of an artificial "intelligence"? It may well be sentient or not, that doesn't matter. But it is intelligent nonetheless. So the speaker is basically saying we should ask for the consent with a highly trained programme that we call A.I. That's another way of training the programme by making it ask for the consent before any experiment is conducted on it. To me at least that is profound. I never thought of it in that way. He went there after talking about "AI colonialism" and "end of culture" as we know it. Philosophical? Certainly. Scientific? Pretty unlikely at present. Impossible? No friggin' way, given the speed of its advancement.
youtube
AI Moral Status
2022-07-24T05:3…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytr_UgyYQ-mtLbwEAK87XTl4AaABAg.9dSS38eu94l9dYjRY3ITsS","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyYQ-mtLbwEAK87XTl4AaABAg.9dSS38eu94l9eFhwc8qoTX","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_Ugya3tqbuwEHOARz0IV4AaABAg.9dSBBrMp0Pp9dSLIfr3H-h","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugzw8Zh1uuj4MJf-c3l4AaABAg.9dRfjg27vtR9dqchK8rPRc","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgyD9liWXv9UFIVG7GR4AaABAg.9dRKUBQibrC9dr2y4ac6cd","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgwKMSN9LkVtDODvnS54AaABAg.9dQvsuYgiyt9dR2p94XSPr","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwKMSN9LkVtDODvnS54AaABAg.9dQvsuYgiyt9dRsflDFBaO","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgwOXYBiV-TpNSCfeLh4AaABAg.9dQfJa9BN_f9dQjLzH3dfp","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugz3xd6azCtoCA4jiWV4AaABAg.9dPwCtH0UPW9dQr3vptPb-","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytr_Ugz3xd6azCtoCA4jiWV4AaABAg.9dPwCtH0UPW9dR8tm9seCL","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
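The raw response is a JSON array with one record per coded comment. A minimal sketch of how such a payload could be parsed into a lookup table keyed by comment ID and validated against the coding scheme; the label sets below are inferred from the values visible in this page's samples, so the real codebook may include additional labels:

```python
import json

# Allowed labels per dimension, inferred from the sample responses above
# (assumption: the actual coding scheme may contain more labels).
CODEBOOK = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "resignation", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, rejecting any out-of-codebook label."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in CODEBOOK}
    return coded

# Hypothetical one-record payload in the same shape as the response above.
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
coded = parse_raw_response(raw)
print(coded["ytr_example"]["policy"])  # regulate
```

Validating every record before display is what lets the inspector surface a coded comment by ID with confidence that all four dimensions carry legal values.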