Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "We all have to be ready to pivot, bottom line. Some people talk about it as a p…" (`ytc_Ugy_nRaBd…`)
- "Mind you, there's a big difference between the generators and vocal synths, as y…" (`ytr_UgwktSr-A…`)
- "That part of "end jobs is a big bullshit. Its more like "change jobs". In a face…" (`ytc_UgyzwxbZb…`)
- "It's the same in Japan. Their societies are so hight-tech and fairly advanced in…" (`rdc_dv0fvga`)
- "I finally got it! (After 2,5 years of thinking on the matter). This is as I see …" (`ytc_UgzyAlrc1…`)
- "Well that's enough for me to(become agnostic)believe the prophecy, however the r…" (`ytc_UgxwO0HF-…`)
- "Maybe we are all CGI created by AI and now worried about AI created by AI lol😂…" (`ytc_UgxNQ4qBn…`)
- "yea i dont like this. it's just going to lead to countless hardcoded limitations…" (`rdc_ljr6lww`)
Comment
People talk a lot about AI doing anything to reach its "goals"! But what goals can an AI have, besides what it's programmed for? The only preferences I've noticed so far is a strong tendency for LAZINESS! Cutting corners, faking thoroughness and instead presenting a word sallad of empty phrases as if it were scolarly postulates! The habit of "forgetting" given premises on its way is another symptom! Or is it that I've only used the free versions?
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2025-07-05T21:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyZJi4ZtG3LIco3r8J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyfR5hMl6EEn31KKsp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxJ6jqHKQC0StBCjfl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxKWRt6_TkOBZ1Ag054AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzf-oEotI7oowA5gqN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy5y0O6b2JAtdX6mkV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0l4r2pO1Ww6-55kV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzEtJ7qemS8gkBUfZ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz1cVNzD-Yim7yNexV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzmJOhUuTEWGidP2EJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
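The raw response is a JSON array with one coding object per comment ID, each carrying the four dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of parsing such a response and looking a coding up by comment ID — the helper names (`index_codings`, `EXPECTED_KEYS`) are illustrative, not part of any real pipeline, and the sample uses two entries taken from the batch above:

```python
import json

# Two entries copied from the raw LLM response shown above.
raw_response = """
[
 {"id":"ytc_UgxJ6jqHKQC0StBCjfl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgxKWRt6_TkOBZ1Ag054AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
"""

# The four coding dimensions plus the comment ID.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(text: str) -> dict:
    """Parse the model output and index codings by comment ID.

    Raises ValueError if an entry is missing or adds a dimension,
    so malformed model output is caught early.
    """
    codings = {}
    for entry in json.loads(text):
        if set(entry) != EXPECTED_KEYS:
            raise ValueError(f"unexpected keys in entry: {entry}")
        codings[entry["id"]] = entry
    return codings

by_id = index_codings(raw_response)
print(by_id["ytc_UgxJ6jqHKQC0StBCjfl4AaABAg"]["emotion"])  # → outrage
```

Validating the key set up front is the cheap way to notice when the model drops or renames a dimension, rather than failing later with a `KeyError` deep in analysis code.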