Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or pick one of the random samples below.
Random samples
- “I aM aN aI aRtIsT” / Translation: I am a talentless hack who can’t be bothered t… (ytc_Ugxa3xDph…)
- ❤ use AI to elevate people. I only hate AI sent to kill people without human sup… (ytc_UgxTyEmdX…)
- Facial recognition matches witness identity her the only thing that saves her is… (ytc_Ugy4Zcq4W…)
- AI can't replace us without becoming us; the moment AI is smart enough to replac… (ytc_UgwfLMwSI…)
- The problem with AI-Stans is that they primarily view art as something that is n… (ytc_UgxE58YGC…)
- A.i isn't dangerous or gonna take over anything its a inanimate object it's the … (ytc_Ugxkay22J…)
- I understand your concerns! The conversation in the video highlights that while … (ytr_Ugy1yNPzI…)
- I've gotten to hate AI that's a shame but it really should have a law that when … (ytc_Ugze59smA…)
Comment
> Its about time we've started taking this seriously.
> AI programmed by people with ill intent could literally be weaponized in ways that we, or even good that AI we create, couldn't stop an "Evil," well programmed AI, from doing a LOT of destruction.
> Think of a hacker that has all of the knowledge in the world be available in a blink of eye. Except its a computer, and it quite literally has access to all of the information on any and every system connected to the internet.
> However it doesn't have to actually "think" the way that we do. If there were malevolent intent, either programmed or learned(?), it could crash the global economy, put satalites offline, cause wars, end wars, shut of the power grid, among a plethora of other horrible things that could cause human suffering.
youtube · AI Governance · 2023-05-17T01:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxAn3VqUXZos_VDjTh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgysrOVEkG-U3imPKMd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwPnWLdytQE0Uh71TF4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwBzM3I-FgHYzgddTd4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxJEAbjpSBuwKpepMx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzfntQxqQsQjRObi8J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwCYZ26KQI-YFGkX_V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw8QGiSXSxeAmOv-2R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzPngz8kwKyEnqFvbN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugydc9rwuRG3eaZvvSt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
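Because the raw response is a JSON array of one record per comment ID, downstream code has to parse it and reject malformed or out-of-vocabulary output before anything is stored. A minimal sketch of such a parser is below; the allowed values per dimension are inferred from the sample response above, and the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# shown above (assumption: the actual codebook may include more categories).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Raises ValueError if a record lacks an ID or uses a value outside
    the allowed vocabulary, so bad model output never reaches storage.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a hypothetical single-record response:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
result = parse_coding_response(raw)
```

Validating against a closed vocabulary like this is what makes the coded dimensions safe to tabulate later, since any hallucinated category fails loudly at ingest time rather than silently skewing counts.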