Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID to see the exact batch response it came from, or pick one of the random samples below.
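How the lookup works under the hood is not shown on this page. As a rough illustration only, here is a minimal sketch assuming raw batch responses are appended to a JSONL log where each line records the comment IDs in the batch and the raw model output; the file name and both field names are hypothetical, not the tool's actual storage format.

```python
import json
from pathlib import Path

def find_raw_response(log_path: Path, comment_id: str) -> str | None:
    """Return the raw model output for the batch containing comment_id.

    Assumes each JSONL line looks like {"ids": [...], "raw": "..."};
    this layout is an assumption, not the tool's documented format.
    """
    with log_path.open() as fh:
        for line in fh:
            batch = json.loads(line)
            if comment_id in batch["ids"]:
                return batch["raw"]
    return None

# e.g. find_raw_response(Path("responses.jsonl"), "ytc_UgzbBH0eRWhhkbQScSJ4AaABAg")
```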
Random samples:

- You are too optimistic about human nature, thinking that once the AI is better t… (ytc_UgyqDIZ37…)
- Driverless trucks are a huge mistake; in fact, anything automated is a horrid id… (ytc_UgxQivGUU…)
- Wow. I thought ai art was just random but no. That's not the case. This could pu… (ytc_UgwzhMRgp…)
- the time saving is not real for most people. If you save 1 hour of work and use … (ytc_UgxfEDcdH…)
- AI might be able to make art, but can it roast art babies on Discord?… (ytc_UgwQBe5jE…)
- Red jacket guy: "People are transitioning to training AI... There's gona be a m… (ytc_UgyaBe4iP…)
- why would we want AI to be conscious? What does that have to do with objective i… (ytc_UgwondU_l…)
- I don't think it's accurate to say no one cares. AI Safety is a huge field. It's… (ytr_Ugxerun33…)
Comment
> If you believe this, I've got swampland for you to buy. Look folks, let's be clear, AI will ONLY do what it was "programmed" to do. That means there is ZERO chance it can "overtake" us or "hurt" us UNLESS someone is providing not only that programming but rules of engagement. AI doesn't "respond" like humans, it has protocols to which it adheres. So if there is any chance for some bad AI Actors than it was programmed to do just that and likely was given the command to begin.
>
> Did you understand that? It was GIVEN THE COMMAND TO COMMENCE that bad activity.
>
> So stop with the predictive programming crap. Know when AI does something, that is the time for you to blame your government and act accordingly in response.
youtube · AI Governance · 2025-07-15T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
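The four coded dimensions can be captured in a small record type. Below is a minimal sketch in Python; the class name is hypothetical, and the value sets are only those observed in the samples on this page, so the actual code book may define more categories.

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets observed in the samples on this page; the real code book
# may include additional categories.
RESPONSIBILITY = {"developer", "company", "user", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"liability", "ban", "industry_self", "none", "unclear"}
EMOTION = {"fear", "outrage", "resignation", "approval", "indifference", "mixed"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject any value outside the observed code book.
        for value, allowed in (
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")

# The result shown in the table above; the ID comes from the matching
# record in the raw response below.
result = CodingResult(
    comment_id="ytc_UgzbBH0eRWhhkbQScSJ4AaABAg",
    responsibility="developer",
    reasoning="consequentialist",
    policy="liability",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:59.937377"),
)
```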
Raw LLM Response
[
{"id":"ytc_UgwB76rPksp_uOBybfx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzxOaORPAr_VTsoss14AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyiqP_RpXu5hV4n3XB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzKzIa7jSAKG6Cnq8R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxMEY9iX-t3xUEagfB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzbBH0eRWhhkbQScSJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugw5a62sLYR5Y24LW5V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx0HMpxU1deKcSeG3d4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxp6VDXFm2o3OcX4vF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxJSe0g8ngq7u7LbK94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
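Each raw response is a JSON array with one object per comment in the batch; the coding result shown above corresponds to the sixth record here, the one whose values match the table. A minimal sketch of how a single record could be pulled out of the raw text, with a hypothetical function name:

```python
import json

def record_for_comment(raw: str, comment_id: str) -> dict:
    """Parse a raw batch response and return the record for one comment.

    Both failure modes handled here are real when coding with an LLM:
    the output may not be valid JSON, and a comment may be silently
    dropped from the batch.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model output is not valid JSON: {exc}") from exc
    for record in records:
        if record.get("id") == comment_id:
            return record
    raise KeyError(comment_id)

# Applied to the array above:
# record_for_comment(raw, "ytc_UgzbBH0eRWhhkbQScSJ4AaABAg")
# -> {"id": "...", "responsibility": "developer", "reasoning": "consequentialist",
#     "policy": "liability", "emotion": "indifference"}
```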