Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples
- `ytc_UgwaLuU9H…`: "…enjoying the process? I’m a hybrid of a human and AI artist, but I still see no…"
- `ytc_UgyBrcSCt…`: "We need a modern day Sarah Connor. AI is just doing more harm than good to both …"
- `rdc_mjx76fo`: "Because the AI is very capable of non-toxic glue to be or not 9308sdfiodsjflkwer…"
- `ytc_Ugww5HwGg…`: "I'm a musician. I'm not a great one, but I have been writing music for about 40 …"
- `ytr_UgwAz8v6U…`: "@Nitesh22021990ai generates easy human friendly answers and notes which from whi…"
- `ytc_UgzqDNYgq…`: "I like you actually hypnotised it, did that myself a couple of times, since the …"
- `ytc_UgzlvOdWM…`: "This is the HITLER REGIME all over again. Hitler did the same thing. Before you …"
- `ytc_UgxBYqSmP…`: "Dont blame the car blame the dummy who should have thought twice before he put h…"
Comment
> Intelligent Man or Machine?
> 1) AI is such a very impressive improvement by the 21st-century modern man. However, would AI reduce or increase many more already modern's man problems; improve all individual to become better oneself in all meaningful aspects, (of happiness, longevity, and immorality), or merely makes their life a lot more easy, convenient, comfortable, and laziness?
> 2) Ultimately, if AI could not help improve all individual's health, intelligence, virtue wisdom, morality, and longevity; then why should they put in so much work, hope, expectation, and promises?

Source: youtube · Video: AI Moral Status · Posted: 2020-12-17T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyGCelw8nalsFq3r_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgygeYwoQXytjpOEkD14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwz7Un5hSLJhOeMDoJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzqwwN9at_l_Y-G8aV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwFlL0bGpGptH4cuq94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzA9H2eIslxhxjHWXR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzo2AKqyFqTuexjkaR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxh47jYaCAGF6NDo_d4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyFPoC6ZL0d--bnf3B4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyp-T8EmQ_7RZb3Ad14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
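A raw response like the one above is a JSON array of per-comment codes across the four dimensions in the result table. A minimal sketch of how such a batch could be parsed and validated, assuming the category vocabularies inferred from the values visible on this page (the full codebook may include more labels; `parse_batch` and `ALLOWED` are hypothetical names, not part of the tool):

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# Assumption: the real codebook may define additional labels.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "liability", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse one raw LLM response into {comment_id: codes}, rejecting unknown labels."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row[dim] for dim in ALLOWED}  # KeyError if a dimension is missing
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
        coded[row["id"]] = codes
    return coded

raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"}]'
print(parse_batch(raw))
```

Validating against a closed vocabulary at parse time is what makes values like `unclear`/`mixed` in the result table trustworthy: any label the model invents fails loudly instead of silently entering the coded dataset.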