Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Lmao these ai guys really want to profit off of studio ghibli art and claim that…
ytc_UgyWKqZe0…
bro why is everyone talking bad abt david when they know they cant run a 62 bill…
ytc_UgzA0100B…
Because "making" A.I "Arts" Isn't your own, it's the A.I's, you merely command i…
ytc_Ugy3Fernw…
Crazy how it takes on the tone and speech. When you correct chatgpt, it just say…
ytc_UgyQSU-zK…
It is ridiculous to use robotic/AI to take over important jobs from humans. Ulti…
ytc_Ugw1V1n8w…
AI is good for a reference pic but it dose not fell like I am making art when I…
ytc_UgwqziM9a…
how much creativity do you need to write: "man and woman walking side by side in…
ytc_UgzBmGMO5…
I really think a humanoid machine robot from the future needs to come back and f…
ytc_UgxpAWSPi…
Comment
I've read a lot of comments here about what it's like to talk to AI. I'm a heavy user, and a lot of these comments sound like they're unfamiliar with talking to AI, even though they're confidently making their claims. That scares me.
Other things about AI scares, such as psychopaths in power that want to make a global control grid that forces people to be "good", put microchips in our brains, and have a transhumanist agenda that melds humans with machines.
You guys ever think it's odd how Musk talks ominously about AI but he's entrenched in it, apparently making huge data centers, and has a company that puts microchips in people's brains?
youtube
AI Harm Incident
2025-11-12T18:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgwxsJv_8uIjxpxhvpt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyJ5R415hFm3Z8ToxB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyrs2att-jg9iBUmJ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzakWTn4J1zvihdRkh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw4mzEf1ltC4RQhRch4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgytCiWip6F3HEehtKd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxH7hbPMuCWKWIfhGp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxqGL05MJPRenqQyIl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugx3aXPedfGd5ud03gB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwnldmKcV2w5SurzYl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
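The raw response above is a JSON array with one coded record per comment. A minimal sketch of parsing and validating such a batch before accepting it into the dataset, assuming the dimension vocabularies inferred from the table and the response shown here (the actual codebook may define additional values):

```python
import json

# Allowed values per coding dimension — an assumed vocabulary,
# reconstructed from the codes visible in the sample response above.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "company", "developer",
                       "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed",
                  "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none",
               "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only rows whose value
    for every dimension is in the allowed vocabulary."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical usage with an abbreviated record:
raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"fear"}]')
valid = parse_codes(raw)
print(len(valid))
```

Dropping (rather than correcting) invalid rows keeps the pipeline conservative: a model response with an out-of-vocabulary label can then be flagged for re-coding instead of silently entering the results.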