Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
AI - is soulless. as long as he does NOT have a soul. as we know - so far no ai …
ytc_Ugyn9F_ym…
Bro another BS AI interview... Like all the AI "knows" is publicly available on …
ytc_UgzO-jlP2…
DIGITAL DRAWING IS BETTER THAN AI IT USES NO WATTER AND USES LITTLE TO NO RAM A…
ytc_Ugzl2q3Wo…
@aw-resistance9968 Now that I completelly agree, but the worst part is, no one s…
ytr_UgwhUsm1t…
This facial recognition is going to ruin a lot of lives. The intention does not …
rdc_ffg4n5k
Am I the only one who's concern the battle net AI was shockingly similarly named…
ytc_UgzwWwlIC…
In 10 years when AI fails there'll be a ton of plumbers and electricians competi…
ytc_UgzoS0YT8…
Elon Musk: "AI will destroy humanity !" also Musk: "Look at my new line of rob…
ytc_UgzKX4v8z…
Comment
First these "godfathers" developed something, and then they are going around the world telling others not to use it. In both cases, their pecuniary incentives are presumably higher than their moral or intellectual ones.
This is not comparable with the Manhattan project, because then, it was a desperate time. In the last 3 decades, the desperation has been less clear. If they began to feel that AI could be potentially harmful, why couldn't they nib the findings in the bud? Such censorship is not unheard of in academia.
youtube
AI Responsibility
2025-08-24T14:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwMZH8P1lQVQh_mNzt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyTScDvE4XcW0Kpd5x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwkrIt9P3qgAuYQhPR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgypxRzHcrakoIov7WR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyfRN3gLwry8nlfIuB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwLWijgKdxuhN5eyzN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxfMR2bajN4Y7_ewBZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzvFLj2xKr_3Vv8Jxt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx4gWM8xuLihBr1Kkp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwLGxGCkp3XmPRt5e94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
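The raw response above is a flat JSON array, one object per coded comment, with four label dimensions plus the comment `id`. Before trusting such output, it is worth parsing and validating it against the codebook. Below is a minimal sketch; the allowed label sets in `CODEBOOK` are inferred from the sample response above and are hypothetical, as the real codebook may define additional values.

```python
import json

# Hypothetical codebook, inferred from the labels visible in the sample
# response above; the actual annotation scheme may allow more values.
CODEBOOK = {
    "responsibility": {"developer", "company", "user", "distributed",
                       "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological",
                  "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "fear", "indifference", "mixed",
                "approval", "resignation"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only rows with an id and
    valid values in every coded dimension."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip malformed rows
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid
```

Rows with an unknown label are dropped rather than repaired, so a downstream count of parsed versus returned rows doubles as a cheap check on how often the model drifts off the codebook.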