Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
It doesn't think on it's own, it focuses on the main task is given and gives it so much priority that it can kill even if it's told not to, if you take a look inside an AI brain you'll see its not a verry strict formula, whatever youinput gets proceesed and the AI just chooses the awnser that is most likely to be the good one, and that awnser can be lower 50% right. The AI doesn't WANT to harm people it just focuses on its purpose even it it means harming people
Source: youtube
Category: AI Harm Incident
Posted: 2025-09-12T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
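The coding result above can be represented as a small typed record. A minimal sketch, assuming the four dimensions shown in the table; the class and field names are hypothetical, not the tool's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record mirroring the coding-result table above;
# names are illustrative assumptions, not the tool's real schema.
@dataclass(frozen=True)
class CodingResult:
    responsibility: str  # e.g. "ai_itself"
    reasoning: str       # e.g. "consequentialist"
    policy: str          # e.g. "unclear"
    emotion: str         # e.g. "fear"
    coded_at: datetime

# The row from the table above, as one record.
result = CodingResult(
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="unclear",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)
```

Freezing the dataclass keeps coded results immutable once stored, which is the usual choice for audit-style records like these.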
Raw LLM Response
```json
[
  {"id":"ytr_UgxeBwA_8iB2lwy-J-14AaABAg.AN-YuGaDEtEAN-txLvJnYY","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgyWvuR5npzKrKqkDBh4AaABAg.AMzIJW74v8HAMzKWIs3xVk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgyWvuR5npzKrKqkDBh4AaABAg.AMzIJW74v8HAMzbcIwUXZ6","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgzvpHPXYudyPKSoaaF4AaABAg.AMymkeE37-kAMynrmeUZo7","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgzvpHPXYudyPKSoaaF4AaABAg.AMymkeE37-kAMyt2KOLSu4","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgzvpHPXYudyPKSoaaF4AaABAg.AMymkeE37-kAN-_t80t8Yg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugx9jKdQT9LOs1vF6vZ4AaABAg.AMyhLP1kOlsAMyhWVgkN0t","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytr_Ugz6p7IQxDShNpNkGsZ4AaABAg.AMyeBR4ZTurAN5ardZTMT0","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytr_Ugx9CZdBNVpl5gzoebJ4AaABAg.AMx9ech7d9bAMyZrE-dn1K","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxYy5njazSxSrrn1R14AaABAg.AMwuY3h4V3rAMwyZ88Ow72","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
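Inspecting a coded comment means finding its record in an array like the one above. A minimal sketch of parsing, validating, and indexing such a response by comment ID; the allowed category values are inferred only from the samples on this page (the real coding scheme may include more), and the helper name and sample ID are hypothetical:

```python
import json

# Category values observed in this page's sample responses only;
# an assumption, not the full coding scheme.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval"},
}

def index_responses(raw: str) -> dict:
    """Parse a raw LLM response array and index records by comment ID,
    rejecting any record whose dimensions fall outside ALLOWED."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# Usage with a hypothetical single-record response.
sample = ('[{"id":"ytr_example","responsibility":"ai_itself",'
          '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
idx = index_responses(sample)
record = idx["ytr_example"]
```

Validating at parse time catches model outputs that drift from the coding scheme before they reach the coded-results table.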