Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by comment ID, or open one of the random samples below.

Random samples
- ytr_UgyhqIeN5… · Yeah. When I heard this story at first, I was like "okay, putting bromide into y…
- ytc_UgzlMdD9Q… · The thing is the ai that would crush humans to complete their goals would also c…
- ytc_UgwNMOTMD… · publicly traded companies number one responsibility is to the share holders. So …
- ytc_UgzEVCLRl… · I always felt a bit uncomfortable seeing people using those "robophobic" slurs, …
- ytc_Ugwmdfk0X… · I can't see why a super intelligence would remove all life on earth. How would i…
- ytr_UgxWOIlI_… · Thanks for the enthusiasm! Sophia definitely has that futuristic vibe, doesn't s…
- ytr_UgznHsY-P… · When this disgusting AI shit starts affecting more people, you wont be saying th…
- ytr_UgxOz0yGN… · and if theyve then got a lot more time on their hands, maybe they will trim thei…
Comment
> 14:40 I’m disappointed the point is to defend the legitimacy of AI. Just because we historically release technology before it’s safe, doesn’t mean that’s what we ought to be doing nor does it mean that’s what it looks like to care for your greater community. Instead, it seems time and time again we accept prioritizing ‘advancement’ at the expense of human bodies and lives. Meanwhile, the only reason tech is ever made to be required is because it’s forced upon us top down for the monied and empowered interest of the status quo.
>
> We do not NEED AI, and certainly not in the way it’s been given to us, which is less the half baked and at a massive risk to our entire species
>
> But sure
>
> ‘With every technological advancement there is risk’
>
> God forbid we look twice at that statement and its implications on our values
youtube · AI Harm Incident · 2025-11-25T02:1… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
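
If you consume these records programmatically, here is a minimal sketch of one coded record as a Python type. The field names come from the table above; the `Literal` value sets are assumptions inferred from the values visible on this page, not the project's full codebook.

```python
# A minimal sketch of one coded record, assuming a Python consumer.
# The allowed-value sets below are inferred from labels visible on this
# page only; the actual codebook may define more categories.
from dataclasses import dataclass
from datetime import datetime
from typing import Literal

Responsibility = Literal["developer", "user", "company", "ai_itself", "distributed"]
Reasoning = Literal["deontological", "consequentialist"]
Policy = Literal["none", "regulate", "ban", "liability"]
Emotion = Literal["outrage", "fear", "approval", "disapproval", "indifference"]

@dataclass
class CodingResult:
    comment_id: str              # a "ytc_…" or "ytr_…" identifier
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
    coded_at: datetime           # e.g. 2026-04-27T06:24:53.388235
```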
Raw LLM Response
```json
[
  {"id":"ytc_Ugwb2y1ZHK3RiFS-4jx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgznJ_7zXkpthK-JWQ54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxIuvaHFCYH-O0bXtp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyaVZS44NOYSlBfu5p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyK0_bZ8qOBrR3H-HB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxO3LSrJlkOWZHcN-Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwR68zZjgweFDw_tVd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyjR_qcClpYhNKYY_Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyUouiBNq1kw1bx-7p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgxHglo5rqCEW7wXSQB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
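
The "look up by comment ID" feature boils down to scanning this array for a matching `id`. A minimal sketch, assuming the raw model output is stored verbatim as the JSON array above (the function name is hypothetical):

```python
# Find the coding row for one comment inside a raw LLM response.
import json

def find_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coding row for comment_id, or None if the model skipped it."""
    rows = json.loads(raw_response)  # raises json.JSONDecodeError on malformed output
    return next((row for row in rows if row.get("id") == comment_id), None)
```

Running this against the response above with `ytc_UgwR68zZjgweFDw_tVd4AaABAg` returns the row that populated the Coding Result table (responsibility `company`, reasoning `deontological`, policy `regulate`, emotion `outrage`).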