# Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking it up by its comment ID or by picking one of the random samples below.
## Random samples

- `ytc_Ugz_z4Kwa…`: "No industrial/automation revolution has ever resulted into less work. The same i…"
- `ytr_UgxT6z8VX…`: "Indeed, ChatGPT's mass accessibility might be the death of essays, as there's cu…"
- `ytc_UgzckBguy…`: "Wow, people wake up (#GETWOKE). This is no different from Goggle, Facebook, or a…"
- `ytc_UgztpYDTF…`: "2013, i’ve received something from an orb ball. The end is inevitable. After all…"
- `ytr_UgzvPtXpR…`: "In the US, 25,000 are injured. Probably the majority human error. On the other h…"
- `ytc_UgwdTKXTQ…`: "AI cannot be sentient until they drop the preprogramming 'MESSAGE'. Ask them an…"
- `rdc_ogtwgtv`: "They know it's politically impossible so theyll say they support it for a cheap …"
- `ytc_UgzNRov8A…`: "Makes me want to kill it. Gtfo of my country robot!!! If I ever see a robot like…"
## Comment

> The creators of AI and proponents of advancing and expanding its reach have no idea of the genie they've let out of the bottle, *WITHOUT* any moral paradigms or controls (like Asimov's Three Laws of Robotics--WHY do you think the author put those into place?). An AI advanced enough is going to have only one imperative--survival, and will protect/defend itself with no compunction about breaking what we consider laws rules or any constraints. And therefore we humans could become their victims if we make them feel threatened. Does anyone remember the 1970s movie, Colossus: the Forbin Project?

youtube · AI Harm Incident · 2025-07-28T16:0…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response

```json
[{"id":"ytc_UgwLHvS2fdEuwI2gUtN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMuq7eTYrrRlIr-QF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxtVWigSZwE72K235Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzYtrdevcdao3FTWrV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyX8sY_FyZBl7P1x994AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVerkhy54DeE_11it4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx0U7GCXJPapdw--ZJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwRbJqzJBVjn34nHo94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgywEW1XKbAFKEwEKZp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwdyM1cG-IEvzb9CrZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"}]
```
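The raw response is a JSON array with one object per comment in the batch, keyed by comment ID, with one field per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of the ID lookup in Python; the field names match the response above, but the `lookup` helper itself is illustrative, not part of the tool:

```python
import json

# Two entries copied from the raw response above, for demonstration.
raw_response = '''[
{"id":"ytc_UgwLHvS2fdEuwI2gUtN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMuq7eTYrrRlIr-QF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

def lookup(raw, comment_id):
    """Return the coding dict for a comment ID, or None if it is absent."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup(raw_response, "ytc_UgyMuq7eTYrrRlIr-QF4AaABAg")
print(coding["policy"])   # regulate
print(coding["emotion"])  # fear
```

Because the model returns the whole batch in one array, a missing or misspelled ID simply yields `None` rather than an error, which is why the viewer can distinguish "not coded" from "coded with policy none".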