Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Gave me confirmation the government is racist because they are programming AI to…
ytc_UgxRsuAaC…
I always question and worry about what we are already being fed as being real, b…
ytc_UgxjtUl6d…
Alternatively brain rot humor and slang is something AI will never be able to co…
ytc_Ugx1CxKZm…
@scvscades
Anyone who knows ai can copy what you do.
There is a reason why s…
ytr_UgxqYSF3p…
AI and UBI like alignment ? look at Anguilla's AI domain name sales funding 50%…
ytc_UgyHSgJ6U…
There is no "AI" as of yet, "AI" now is just a buzzword used by youtubers and wh…
ytc_Ugxz34QJ4…
Waymo uses a bunch of different sensors that many times conflict with each other…
ytr_UgwEdd57k…
@camlinhall1363 an answer like what from who? I was talking about this so called…
ytr_UgzuRHULk…
Comment
Autopilot shutting down a second before a crash may sound shocking but it's a paradigm that's been around for decades. On Airbus aircraft, the computer that exists in order to prevent human pilots from "misflying" the plane is designed to throw in the towel and surrender control to the pilots in the event of systems failures. Yes, there have been occasions where the computer has prevented accidents, as there have been times when Tesla autopilot has prevented crashes. But that door swings both ways and law makers and manufacturers are in denial about this.
youtube
AI Harm Incident
2022-09-03T16:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx8moUl6EJZkFS7P714AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy0AP_Pp3vlulMJqtZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw-yOEoBIQp4Qzjy1N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgymMIWpL9GSaG1-2wl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugzxqt7bVMnYuRUZLd14AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwUVfwXku90xKirGgJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz2DvIMOu6H14tQOeF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzreR0wjmEax0vs9254AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyN6Oth4LTS1Ixa_Bl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxAiAEwArnDsRwwQCx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
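Turning a raw response like the one above into per-comment coding results can be sketched as a parse-and-validate step. The following is a minimal sketch, not the project's actual pipeline: the allowed values for each dimension are inferred from the samples shown here, and `parse_llm_response` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the sample responses
# above (an assumption, not the project's actual codebook).
SCHEMA = {
    "responsibility": {"none", "user", "company", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"indifference", "approval", "fear", "resignation", "outrage"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    mapping from comment ID to its coding, rejecting out-of-schema values."""
    coded = {}
    for rec in json.loads(raw):
        coding = {dim: rec[dim] for dim in SCHEMA}
        for dim, value in coding.items():
            if value not in SCHEMA[dim]:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
        coded[rec["id"]] = coding
    return coded

raw = ('[{"id":"ytc_Ugx8moUl6EJZkFS7P714AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"resignation"}]')
result = parse_llm_response(raw)
```

Validating against a fixed vocabulary catches the common failure mode where the model invents a label outside the codebook, so bad rows fail loudly instead of being stored.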