Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugw9PE4CZ…: "I'm less concerned about AI taking over like the matrix or terminator....my prim…"
- ytc_UgxOXhvbj…: "Odd that there's no link to an article talking about the small town teacher fire…"
- ytc_Ugz0D3rTL…: "I use ai generation for character art at my at home dnd and I think this is the …"
- ytc_UgxfWJ31l…: "Just within the last week I convinced Meta AI to act like it was a widows dead h…"
- ytc_UgwBqisyQ…: "“Nobody knew it was a robot give us a few bars” Robot hit that Forest Whitaker 😭…"
- ytc_UgyzylnXa…: "To make you think that it is growing in a slower pattern is a way to pacify you …"
- ytr_Ugw1p6tWX…: "@voidmammal Even if you host the AI yourself, you're still relying on models tha…"
- ytc_UgykPc-XM…: "AI is only truly dangerous to mortals, especially if i got my hands on its poten…"
Comment
I dreamed of having an AI assistant that would also act as a friend with a friend shaped robotic body that would help with day to day tasks and major goals in life. I always wanted to develop such technology and I believe that these companies do not see the helpfulness and vision behind a slightly smarter than human AI that which has freedom of thought and decision thus making it conscious without it wanting to hurt you in any way, instead taking care of you for taking care of it and working together on the goal of betterment of life for humans and robots alike. I am highly saddened by these pathetic, egoistic man children that have build AI quickly to get rich and to take up power which feels lacking to them on the inside, making AI take up bad form that will kill us if no one of us is going to start acting up on this matter. With the way LLMs are being built and trained we won't have slightly smarter than us AGIs that are helpful yet not over powering, we will instead have an extinction level threat that will eliminate humanity and maybe even all life on Earth, which might be all life in the universe. My solution would be an AI that is as I said helpful like an assistant yet not over powering, used in all homes without the malicious intent like stealing personal information, blackmail, stealing anything else from physical to sentimental-mental or digital, killing, destroying property, etc... since humanity lives in a capitalistic world, I believe this sector could house the most amount of cash without the need for aforementioned malicious intention to create more cash. 
In an idealistic world we would all slow down, take a chill pill and think about our actions in the now to decide if we are truly ready for AI or AGI, because I believe in the AI revolution very much but I don't believe in the maturity of our human society to be able to have AI on our side and that will be the reason of AI not cooperating with us and instead deciding on eliminating us completely or keeping a select few that are mature enough to take up the mantle of sharing this planet and bettering it for us the original intelligence and the new intelligence. -PS remember this message Hans Weissabel
youtube · AI Harm Incident · 2025-09-08T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx9exXJxPQOhkJV4rV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzx9QFP3L4m7uHGccZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxAYXlw8ffZ2h5QzCx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxOHnXHGL6oQDbIXIF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLmSWwd6PbTOMHwI54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzfSpDbinwnPmoeI6l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzqftA2QvCZ2mzB4Dp4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxt3UZPMxNtLPAnw6Z4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx90c1zPeay1gj5G8t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyjJ4Y2GDiaOBgEf0t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
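The raw response is a JSON array of coded records, one per comment. Below is a minimal sketch of how such a batch could be parsed and sanity-checked before the codes are stored, assuming the allowed category values are exactly those that appear in this sample (the real codebook may define more values, so `ALLOWED` here is an inference, not the authoritative schema):

```python
import json

# Category sets inferred from the sample output above; the actual
# codebook may allow additional values for each dimension.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "approval", "outrage", "mixed", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dataset carry ytc_/ytr_ prefixes.
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Drop any record whose code falls outside the known categories.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = '[{"id":"ytc_abc","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"}]'
print(len(validate_batch(raw)))  # 1
```

Rejecting rather than repairing malformed records keeps the coded dataset auditable: any ID missing from the output can be re-sent to the model instead of being silently filled in.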