Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I have yet to see AI program efficiently. It works well for building what a prog…" (ytc_UgwL60bcH…)
- "Spouts the danger of AI yet wants to put computer chips in your head? Anyone st…" (ytc_Ugxu64zk-…)
- "Guys, I live in Brazil and I have been trying to share your content with our po…" (ytc_Ugz8gE7y6…)
- "To be fair, Roblox's AI filter is pretty stupid which makes it easy for decals t…" (ytr_UgxEYuyHC…)
- "Nope not trucking. You tell AI to go deliver a 70 ft truck at 911 Elm CT, Nashvi…" (ytc_UgxIzIuqc…)
- "I mean the more society can be automated the more valuable your time is in compa…" (ytc_Ugz4khuSz…)
- "Great question. Self-driving trucks are being tested, but the tech isn't ready f…" (ytr_UgyfhKw4D…)
- "It wasn't until I learned about the film "Echelon Conspiracy," about how artific…" (ytc_UgxygBpCo…)
Comment
We have reached a point where we mimicked humans so accurately, that Ai also becomes our own echo chamber; even at the end of a firing chamber. There must be better safeguards—we don't need machines to be evil; humans have played that role well enough for far too long.
youtube · AI Harm Incident · 2025-11-08T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyNGbv01MqlUpFWwcB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyahM2qSP9j26C-y1F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzTGWXmMxQO7esWI1R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy35KgfD5GWg0Wkg_N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5Z4KuaWzgVgsTpgp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwGfJ2_HJAkXw2Jo1R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyXl1pZErahLgaUAoV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0Kzq5IK1MOMB8Z5p4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyjM-VeqGppm66fhB14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzsihdIa5_D0CDZrGZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"}
]
```
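A response like the one above can be turned into per-comment codes with a small parser. The sketch below is a minimal example, not the project's actual pipeline: it validates each record against the four dimensions from the Coding Result table, and the allowed code values are only those observed in this one response (the full codebook may define more).

```python
import json

# Code values observed in the response above; the full codebook
# may define additional values (assumption).
OBSERVED_CODES = {
    "responsibility": {"government", "user", "distributed", "company",
                       "developer", "none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed",
                  "virtue", "unclear"},
    "policy": {"regulate", "none", "liability", "unclear"},
    "emotion": {"fear", "resignation", "outrage", "indifference",
                "mixed", "sadness"},
}

def parse_coding_response(raw):
    """Parse a raw LLM coding response into {comment_id: codes}.

    Raises ValueError on a missing id/dimension or an unseen code value.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError("record missing id: %r" % rec)
        codes = {}
        for dim, allowed in OBSERVED_CODES.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError("%s: unexpected %s=%r" % (cid, dim, value))
            codes[dim] = value
        coded[cid] = codes
    return coded

# One record from the response above, passed through the parser.
raw = ('[{"id":"ytc_UgzTGWXmMxQO7esWI1R4AaABAg",'
       '"responsibility":"distributed","reasoning":"mixed",'
       '"policy":"liability","emotion":"outrage"}]')
result = parse_coding_response(raw)
```

Keeping the allowed-value sets explicit means a hallucinated or off-codebook label fails loudly at parse time instead of silently entering the coded dataset.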