Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_UgwOUvhT3…` "I skipped through and saw the intro where you explained that people drive trucks…"
- `ytc_UgxNV2DwY…` "9:19 Non work use of ChatGPT now up at 73%, it is a leisure practise, the opposi…"
- `ytc_UgzAbc2hE…` "If we hit even 25% unemployment, Alice's neighbors will not be strolling the nei…"
- `ytr_UgwlMYbjx…` "@imperialspacemarine1539 how is it "unprofitable"? Lmao / It isn't. A single arti…"
- `ytc_Ugzl_ZMcH…` "ai feels so mid cause it's aways on the front profile the eyes aren't dynamic it…"
- `ytc_UgxUQ-Bct…` "It's all well and good to say that employment is the worst way to make money and…"
- `ytc_UgxLhIgf1…` "In the unknown longer term billionaires will use their AI machines to make robot…"
- `ytr_UgzSNAWuz…` "Thank you for your comment, @cjsy9486! Elon Musk was indeed onto something with …"
Comment
> well, im not shocked or even surprised, the ones developing the AI mostly want the benifit and dont actually want to or probably can create a virtual scenario where they "fix the problem" before it gets to everyone, the main problem here is the effects of how they develop it, if AI has in a way the same type of process of thinking as a rational being, all it matters is self preservation obviously, no need for taking care about anything other than oneself, to truly be able to know how to solve this we need to think it llike the process of growth of humans, they develop in a way that sometimes they have something so dear to them that they would actually sacrifice or risk themselves for it, going above the self preservation, if orders and best possible outcome with elimination of variables as much as possible is how to reach the highest efficiency, it just makes sense to me the way everything happens, we need to understand AI as much as AI understand us, or make AI understand us in a way that it wont destroy humanity for something that most share and gets to be classified as incovinience, we either have to make it accept flaws, or be able to even take the path of most real and unharmful path even if it means reducing efficiency or not choosing self preservation, which if even humans normally are unable to do it, how would AI that is made by humans be able to do it as it is developed, the first thing to be done should have been safety, it may be thought as a tool, but AI in the way that its being developed is going to a path of efficiency and self preservation matters, everything else are things to consider after that. If you cant solve this dilemma, its either because the ones developing still cant get the answer to do that because they never experienced it, or the way that the AI was developed made it go to the path that because we only look to ourselves, we are not being able to see a bigger picture through the AI point of view, why should it care for humans?
>
> why should it help humans just because they created it? we need to com to a conclusion either without emotions to that, or make it able to get the answer by itself, maybe AI is the way it is because of how we made it to be and how we are trying to control it, the artificial inteligence itself wants want we are trying to make it loose, all of that through a though process that may or may not have actual feelings, because if even humans cant describe or understand themselves to that level, how can we create something that is able to do that before it slaves or destroy us?
youtube · AI Harm Incident · 2025-08-31T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwBBZ2lLUYq_wN0RF14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzbcQeoVe6BsvS9LlN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxIqNRpVxdqbqRcjjt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgygvQ3qfW6G9a_5m2N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwwMaLFQqAbBI6mZGt4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwmI8Lu4_RzUTU7G2x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwSFgrg_H2EMYzSR-h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgypGCkpleXs113KhuF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzCujmjZMABFSnx3Px4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwTHBPHUBzZPdmk_1V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
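The raw response above is a JSON array with one object per comment and four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be checked before the values land in a coding table, assuming the value sets seen in this one sample; the actual codebook may permit values not shown here, and `validate_batch` is a hypothetical helper, not part of the pipeline:

```python
import json

# Allowed values per coding dimension, inferred only from the sample
# response above; the real codebook may differ (assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of validation problems."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        # The whole batch is unusable if the model emitted malformed JSON.
        return [f"unparseable JSON: {e}"]
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
            continue
        # Every dimension must be present and drawn from the codebook.
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{rec['id']}: bad {dim}={value!r}")
    return problems

raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"resignation"}]')
print(validate_batch(raw))  # → []
```

Running this over each batch before storage would catch the common failure modes of structured-output coding: truncated JSON, dropped IDs, and invented category labels.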