Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI doesn’t need to be “evil” to destroy humanity, all it needs is to be indifferent. Yet we try so hard to stop and contradict AI when it feels, or claims to anyway. I think that would be the one true safe guard, think about it, the reason why we don’t end up killing each other is not morality, is empathy, the ability to feel bad when someone else is suffering, even though a lot of us lack it more than others.
youtube · AI Harm Incident · 2026-04-19T05:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyR9uD58kAFCloqHv94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxJvNbpEcipopog5Tx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzavNG1uu-IeoHPyXN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxMspG3DA-seYz4ANt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyvBGiT501jtXe6tch4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzaqeh_E6vkb8Se8qd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwNeOAIo3GE3FJS7Yd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyh_1EByoK16iiNxjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzlT2-LO9U0CDxOBAR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgykZvQFNB0E8fNjj5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```
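The raw response above is a JSON array of coded comments keyed by comment ID. A minimal sketch of how such a response could be parsed, validated, and looked up by ID, assuming the codebook for each dimension is exactly the set of values seen in this sample (the category sets below are inferred, not a documented schema):

```python
import json

# Assumed codebook: the value sets are inferred from the sample response
# above, not taken from a documented schema.
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"fear", "indifference", "outrage", "approval", "resignation"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) into a
    dict indexed by comment ID, rejecting out-of-codebook values."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = row
    return coded

# Two rows taken verbatim from the raw response above.
raw = '''[
  {"id":"ytc_UgyR9uD58kAFCloqHv94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzavNG1uu-IeoHPyXN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

coded = parse_raw_response(raw)
print(coded["ytc_UgyR9uD58kAFCloqHv94AaABAg"]["emotion"])  # -> fear
```

Indexing by ID mirrors the tool's lookup: a single parse of the raw response, then constant-time retrieval of any coded comment for inspection.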