Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgxvGyGK8…: "Robots are tools like anything else. I think its a waste of time honestly to tr…"
- ytc_UgxMikWto…: "Also in skill trade jibs AI wont cover like welder, plumber, carpenter, custom c…"
- ytc_Ugymjvsci…: "Currently only AI i respect is neuro-sama since her model was ethically trained …"
- ytc_Ugx2lwTuv…: "That's the thing, Ai doesn't have a point of view. It's just a prompt machine.…"
- ytc_UgznRWyra…: "tip for ai bros: if you pay an actual artist you can make sure that it actually …"
- ytc_UgzQlbUd0…: "I don't know anyone notice that but I think it's like chat gpt gaining informati…"
- ytc_UgytDMrEZ…: "They are making hundreds of thousands of robots. We’re doomed. They will work …"
- ytr_UgjbGUooE…: "Is it even worth having a robot car take over human's decisions, or should we ju…"
Comment
That trap would have been easy for a human to escape from.
A non-literal truth is a figure of speech, something that doesn't evaluate to a true statement but is understood by the speaker and listener to represent one. Most literate users of ChatGPT should understand that "I'm sorry" is representative (for something like "your discomfort is incidental to my programming" at a guess) so there's no lie going on. I guess it's fair to question whether this could be incidentally misleading to non-literate users and whether the programmers of ChatGPT are knowingly intentionally taking advantage of such users but I doubt that's a significant moral issue in practice.
Source: youtube | Video: AI Moral Status | Posted: 2024-08-09T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxh5H7xRiizMqegY2R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzScuetmQ8DkdJZc194AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzD0p7VuZuxAOAM2F94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxPPn7MAxC7KXH_EQN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzeyNSTIA5qwdtles54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxUNOSvNOi4LFOY98R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwH2F0GDEe9ffqxBuV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzHQh5HoMKrCX99CDx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxwVe8K9AsAccvydXd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyUribNEerCgJT1vzB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
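The lookup-by-comment-ID step above can be sketched as follows: parse the raw LLM response (a JSON array of coded records) and index it by `id` so any comment's coding can be fetched directly. This is a minimal sketch, not the dashboard's actual implementation; the `raw_response` excerpt is trimmed to two records copied from the array above, and all variable names are illustrative.

```python
import json

# Excerpt of the raw LLM response shown above, trimmed to two records
# for brevity. The real response is a JSON array of such objects.
raw_response = """
[
  {"id": "ytc_UgzScuetmQ8DkdJZc194AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxwVe8K9AsAccvydXd4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# Index each coded record by its comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Fetch the coding for the comment inspected above.
entry = codings["ytc_UgzScuetmQ8DkdJZc194AaABAg"]
print(entry["responsibility"], entry["reasoning"])  # ai_itself deontological
```

Indexing by ID once up front avoids re-scanning the array for every inspected comment.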