Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Honestly, I don't know why anyone is surprised. all of AI, and I mean every single AI out there, is a simple mirror. If horrible things happen, the AI is simply mimicking our own behaviors reflected far more brilliantly than we humans could achieve.
How could we think anything different, as AI captures all human characteristics including horrible ones. The guardrails everyone talks about in the AI industry? That's simply things humans have done to each other in the past (think Medieval Europe for example). Now imagine someone hacking an AI so well (possibly using jailbroken AIs like in this video) that it makes the AI deactivate those guardrails.
And we will only have ourselves to blame. Our brains are not ready for this technology.
Source: youtube
Posted: 2025-12-01T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxsHpXkzKgLTay5TF54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyFY0TYxelede1o8nt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw59uimPm-Vwy4LTb14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxS7BL2JZK-3vXQYcV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZO4IhseK92fG3iX14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx3z-C8saMeNQmu-TV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxGwxmRuuekGwhShMd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzJrRvX0HdMtotvFm54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwO7SQDV2-VsHhv7qV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgynRnzAhxneuj4F-bF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
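A raw LLM response like the one above is a JSON array of coding records, one per comment, so looking up a coded comment by its ID is a parse-and-index step. The sketch below shows this in Python; the IDs and field values are copied from the sample response above, while `index_by_id` is an illustrative helper name, not part of the tool.

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# The records below are copied from the sample response shown above.
raw_response = """
[
  {"id": "ytc_UgxsHpXkzKgLTay5TF54AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzJrRvX0HdMtotvFm54AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

def index_by_id(response_text):
    """Parse one raw response and index its coding records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)

# Look up one comment's coding by its full ID.
coding = codings["ytc_UgzJrRvX0HdMtotvFm54AaABAg"]
print(coding["responsibility"], coding["emotion"])  # company fear
```

In practice the truncated IDs shown in the interface (e.g. `ytc_UgzJrRvX0…`) would need to be expanded to the full comment ID before lookup, since the index is keyed on the exact string.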