Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "They should be in jail and sued an adult AI character being sexual with a minor …" (ytc_UgwYv6OTp…)
- "i retired from water treatment and both that job and your job, sorry to say, are…" (ytr_UgxXnGhL1…)
- "Wow, I can't believe how many tools are out there now. I recently started using …" (ytc_UgxlfpNCB…)
- "Yeah, real paid Therapist will share private stuff with their friends, family or…" (ytc_UgzbGFwTr…)
- "Thank you for your comment! In the video, the presenter interacted with a robot …" (ytr_UgxwPrZNd…)
- "Here comes Skynet. I don't think we will make it to Mars but Ai robots might.…" (ytc_Ugz7DCi8D…)
- "What if the next people were also ai art looking at ai art looking at ai art. 😮…" (ytc_UgwfhIZLS…)
- "Here is my 5 cents on AI pesonhood: I honestly think the debate about AI personh…" (ytc_UgxOMcz4E…)
Comment
> If humans are inherently violent, and AI is leaning by digesting out history, how do you stop it from becoming violent? Sociopaths dont have emotion, so emotion is not necessary for violence. What we really need to do is not let AI CREATE new thoughts. An AI that can control its own "body" and do repetitive chores around the house is not the same as allowing AI to develope a new drug to cure cancer and trust that it will not also create a pill that can kill us.
>
> AI should be given guardrails. AI should be coded at an unchangable level to follow a set of 10 or so commands that create the basis of moral thought. I'm sure such a list already exists.
youtube · AI Governance · 2026-03-26T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
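Each dimension in the coding table takes one value from a small code set. As a minimal validation sketch, the value sets below are inferred from the codings visible on this page, not from an official schema:

```python
# Hypothetical code sets, inferred from the coding values shown on this page;
# the tool's real schema may differ.
CODE_SET = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate(coding: dict) -> list[str]:
    """Return a list of problems; an empty list means the coding is well-formed."""
    problems = []
    for dim, allowed in CODE_SET.items():
        value = coding.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in code set")
    return problems

# The coding from the table above, in the raw-response shape.
example = {
    "id": "ytc_UgwLc2II-NW1KhMrKkp4AaABAg",
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
}
print(validate(example))  # [] — all four dimensions are in the code set
```

A check like this catches a model that drifts off the code set (e.g. inventing a new emotion label) before the coding enters the dataset.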
Raw LLM Response
```json
[
{"id":"ytc_UgwvdKWnPjUV9f-tZVt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwLc2II-NW1KhMrKkp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyHjwe987gHpF5nQw54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyfyEdq_Mgc7Q7gLYt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyXnOWFAraynJwSlV54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy83ApDK18PwGIOdmx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwlp0QaZilJG2rE7PF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxwD0HmezmjFqXNF2F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzoia8QRE4uMWwesCF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy8AP0DU5HWPwxhdgJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
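The raw response is a JSON array of per-comment codings, one object per comment ID, which makes lookup by comment ID straightforward. A minimal parsing sketch, using two entries copied verbatim from the array above:

```python
import json

# Two codings copied verbatim from the raw LLM response above.
raw = '''
[
{"id":"ytc_UgwvdKWnPjUV9f-tZVt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwLc2II-NW1KhMrKkp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
'''

codings = json.loads(raw)

# Index the batch by comment ID for O(1) lookup.
by_id = {c["id"]: c for c in codings}

coding = by_id["ytc_UgwLc2II-NW1KhMrKkp4AaABAg"]
print(coding["policy"], coding["emotion"])  # regulate fear
```

The second entry matches the "Coding Result" table shown for the AI-governance comment above, which is how the dashboard ties a table row back to the exact model output.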