Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- rdc_cylrhx7 — "They'll have a great self-driving car service, until we find out that they decid…"
- ytc_UgwJplcqN… — "I take pictures with my phone and edit photos for fun and I put a lot of effort …"
- ytc_UgyXXBtLt… — "How oblivious can we be? We are all the same consciousness experiencing through …"
- ytc_UgwizW0FP… — "Thats crazy.. put all on AI kid need interaction and emotion to develope.. all A…"
- ytc_UgyZyF5MV… — "its wrong defination..machine learning is not when machine starts learning as hu…"
- ytc_Ugw6ZbPl0… — ""Preferences" can be more accurately described as competing goals. LLMs are desi…"
- ytc_UgxOEp-Cs… — "So basically it’s like the movie Terminator we are really going to need John Con…"
- ytc_UgxbKfkDO… — "The Puppet says scary words. Words programmed by the Super Rich to scare you. Wh…"
Comment
What on earth do you mean it's a monster? You let a baby into fucking 4chan and let that be where it learns. You had it go into like Reddit and expected it not to be violent? It is something that quite literally spits back out what we give it, because humans have yet to discover a genuine artificial intelligence. Yes, it seems like we have, but it is quite literally based on what it was fed, and the only way it knows is when people tell it what to say and what to think. The problems here are what it was given to be based on, no, it's not inherently fucking evil, nor is some eldritch horror, it is just humanity. People, most of the time, are normal; however, that .01% slip is quite good for the mass of the internet being the bowels of humanity.
youtube · AI Moral Status · 2026-01-07T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxzvlw1f_nb-m0BG9Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzM5TcuiG2Y--rSz-x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"skepticism"},
{"id":"ytc_UgwdDeQ-zKJim9mqLUJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzq3_2S84n8-5x6Wil4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzPMOqUL0Vz0c5CaXx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzT9IyD5qu4Eg5OSc94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRAQwHRLK6g25yuhV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxlZUI6NqPfjajLPKJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwdeWe8lMm4qxKh-qV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgylYs6I6hlyuJDIfqV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"}
]
```
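A raw batch response like the one above can be parsed into a per-comment lookup table. The sketch below is a minimal, hypothetical consumer of this format: it assumes the model returns a JSON array of flat records keyed by `id`, and it validates each record against category vocabularies inferred only from the sample output shown here (the real codebook may contain additional values, e.g. more `policy` options).

```python
import json

# Allowed values per coding dimension, inferred from the sample batch
# above — an assumption, not the authoritative codebook.
CATEGORIES = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "skepticism", "indifference", "outrage",
                "resignation", "approval"},
}

def index_batch(raw: str) -> dict:
    """Parse a raw batch response and index records by comment ID,
    dropping any record with a missing ID or an out-of-vocabulary value."""
    indexed = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in CATEGORIES.items()):
            indexed[cid] = rec
    return indexed

# Look up one coded comment by its ID (first record from the sample batch).
raw = ('[{"id":"ytc_Ugxzvlw1f_nb-m0BG9Z4AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"unclear","emotion":"fear"}]')
coded = index_batch(raw)
print(coded["ytc_Ugxzvlw1f_nb-m0BG9Z4AaABAg"]["emotion"])  # → fear
```

Dropping malformed records rather than raising keeps one bad line from discarding a whole batch; a stricter pipeline might instead log or re-queue the offending IDs.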