Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID, or pick one of the random samples below to inspect it.

Random samples:
- `rdc_nmejgzz`: Ima just throw this out there in video game terminology something that I've noti…
- `ytr_Ugz6nsaYc…`: the only reason my family has middle names, is because my dad had a scare around…
- `ytc_Ugy_vd8Mu…`: Bs! What profits? There will be no concept of profits with AI and robots. They w…
- `ytc_Ugxc-6U8e…`: no because ai art isn’t are and it’s not a “time saver” anything worth doing isn…
- `ytc_UgyWLjuJa…`: AI has been tested and used on us for some time now. Even the disappearing pope …
- `ytc_Ugzs5uihW…`: In a scenario where the human race is controlled by AI , surely the only way to …
- `ytc_UgxwZvHBE…`: Could AI automate pretty much everything? Sure. Should it? No. Will it? Probably…
- `ytr_Ugwcq3zEK…`: Dawg it's not hard to use AI. I'm sorry, you're not an artist if you do so. Pick…
Comment
My perspective is that it is unethical to starve people deliberately. Creating a condition of "Ohp, we don't need your services. All poor people will now be without any means to feed themselves." is unethical. So, as long as the "get a job you lazy bum" attitude is in effect, replacing all jobs with robots is unethical. Note that this changes if we are able to move to a post-work society in which people can still feed themselves without having to have been born rich. It is not artificial intelligence itself that is unethical. Rather, it is how it is applied that is unethical. Unfortunately, the people likely to decide how it is applied have a tendency to be unethical.
Platform: youtube
Timestamp: 2014-09-17T02:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
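The table above is a single record from the batch output shown below. As a rough sketch of how such a record might be checked before being stored, here is a minimal Python validator; note that the value sets are only those observed in the samples on this page, not necessarily the project's full codebook, and the names `ALLOWED` and `validate_coding` are illustrative, not the tool's actual code.

```python
# Value sets inferred from the sample responses on this page;
# the actual codebook may permit additional values.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected value for {dim}: {value!r}")
    return problems
```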
Raw LLM Response
[
{"id":"ytc_UgjpM9su4PUgOXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ughl5qc5S__IyXgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgiEfPymBkOFwngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UghGUSpj2mi6B3gCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugj89ulpyU0Cn3gCoAEC","responsibility":"developer","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgiTmXK1IfcrL3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgjbtOR2O7rKYngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugjx6F4Lrk3qFXgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UghaqHhw_KUningCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgidYWIgHmWVzXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
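The raw response is a JSON array with one object per coded comment, keyed by comment ID; the entry for `ytc_UghGUSpj2mi6B3gCoAEC` carries exactly the values rendered in the Coding Result table above. A minimal sketch of how a lookup page like this one might index such a response, where `index_batch_response` and the file name `batch_response.json` are hypothetical stand-ins rather than the tool's actual code:

```python
import json

def index_batch_response(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index the coded records by comment ID."""
    records = json.loads(raw)  # the response is a JSON array of objects
    return {rec["id"]: rec for rec in records}

# Example: look up the record rendered in the Coding Result table above.
with open("batch_response.json") as f:  # hypothetical file holding the raw output
    by_id = index_batch_response(f.read())

coding = by_id["ytc_UghGUSpj2mi6B3gCoAEC"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# -> company deontological liability outrage
```

In practice the model output is not guaranteed to be valid JSON, so a real pipeline would wrap the `json.loads` call in error handling and flag malformed batches for re-coding.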