Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This thoughtful discussion about AI sentients and the ethics of their potential …" (ytc_UgxEGByq8…)
- "If this is true, I’m not surprised. As the CC stated, the bias is in the data, w…" (ytc_Ugy4oTDBT…)
- "One small thing we can do is not embrace AI. The more we use it the more we enco…" (ytc_UgyIZxP8V…)
- "We couldn’t even imagine these smart phones let alone a future of conscious Ai. …" (ytc_UgxJpN2GT…)
- "The Power of Empathy & why AI’s lack there of has no current value when it comes…" (ytc_Ugz4yKOeU…)
- "Government is not capable of reacting to anything in a prompt fashion. And sinc…" (ytr_UgzlGDLsu…)
- "OMG, who heard jittered voice lines at the end from ChatGPT like he tryna' broke…" (ytc_Ugx3CTLfG…)
- "57:30 it's because the opportunity demands your effort. I'm very very much the s…" (ytc_UgzG6m5nN…)
Comment
Yeah so I am an AI dev and this is like super overly dramatic and misleading. Right now, AI's are literally just compressed words that an algorithm predicts by choosing the most probable next word. So in other words, the only possibility of it being malicious is if it was trained on bad data. This also means that it can be malicious if/when a user instructs the AI to do so (think China using AI to attack American infrastructure).
Just like computers, AI literally does exactly what you ask. In return, this is equally as true when it comes to defense against AI models.
Unless future models are built with completely different infrastructure, this video is just fear mongering.
youtube · AI Moral Status · 2025-12-15T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwGCzQBM6B_4VSO-N14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwjU3yNkzmWRaJjf9Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgykZxNMUbv4jGCt82d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzuJ-cpOh_FvZ9295p4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw2CJh8oPw9TgFp-Dp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxElQDcm_NAUNqlM2F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwoYN1GPqrbFN-uCs54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwSSBWsxmZP9XxuGOl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw0Hm35ZZnDJwLTHX94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy9iityQ0p0S42Mqut4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
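A raw response like the one above, a JSON array with one coding per comment, can be parsed and indexed for lookup by comment ID. The sketch below is a minimal illustration, assuming the response is valid JSON and uses the four dimension names shown in the Coding Result table; the function and variable names are hypothetical, not part of the tool:

```python
import json

# The four coding dimensions per comment, matching the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_response: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response (JSON array of coded comments)
    and index the codings by comment ID for fast lookup."""
    rows = json.loads(raw_response)
    codings = {}
    for row in rows:
        # Keep only the expected dimensions; fall back to "unclear"
        # if the model omitted one, and ignore any extra keys.
        codings[row["id"]] = {dim: row.get(dim, "unclear") for dim in DIMENSIONS}
    return codings

# Example: look up one coded comment by its ID.
raw = '''[
  {"id": "ytc_UgwjU3yNkzmWRaJjf9Z4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "indifference"}
]'''

by_id = index_codings(raw)
print(by_id["ytc_UgwjU3yNkzmWRaJjf9Z4AaABAg"]["policy"])  # industry_self
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: each lookup is a single dictionary access rather than a scan of the response array.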