Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- rdc_lp7gdfl: "I think Sam and OpenAI should pull their bootstraps and stop relying on these go…"
- ytc_UgwOOPIBB…: "Lethal autonomous weapons and no bags coming home - the current war in Ukraine s…"
- ytc_Ugwyw7Eu_…: "This is some Minority Report I robot type ish Well. Bring on the Jetsons 🛸…"
- ytc_UgxioKzRz…: "What most of these doomsday scenarios seem to miss is what would be the AIs moti…"
- rdc_ekukot1: "If you are against Facial Recognition technology in the hands of the governmen…"
- ytc_UgwzMya5t…: "Why force AI to fit job descriptions, when you can redefine job descriptions wit…"
- ytc_UgzOi96Lm…: "What if humanity "goes rogue," wakes the eff up and realizes how powerful humans…"
- ytc_UgwjK5kyG…: "It would help if people would stop reading AI as "Artificial Intelligence". It's…"
Comment
As someone who works on airpilot systems in aircraft I can say that even in the most technologically advanced models there are still costly mistakes. This is even with pre-planned routes, known taxi ways, runways, and mandatory radios that always report position (lat, long, alt, airspeed). So if these systems fail given the high amount of investment and oversight, I wouldn't by any means rely on the baby tech that exists in automated cars. Also given the insane variables that automated ground systems deal with, I would be hesitant to trust a system without real ranging capabilities.
Source: youtube · Incident: AI Harm Incident · Posted: 2022-09-03T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxLKj91_yUZcmtzKP54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfuReyxukU6hInDXB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzCvmcpbSIrarILhTl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwVN0ZCKCao6_Zjh414AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxkerxGD8MNCT62-Vl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwhqePNsfSq-qGeLD54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxgKW9FglOYf1rTYZt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw3hys9fA5p-pXqASB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyUQtiA2BozN-OGUjN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwuu7-b-u-guWrgrxl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
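The "look up by comment ID" operation above amounts to parsing the raw batch response and filtering the array by `id`. A minimal sketch in Python, assuming the JSON array format shown (the `lookup_coding` helper name is illustrative, not part of the tool, and `raw_response` here contains only a two-entry excerpt of the array above):

```python
import json

# Excerpt of a raw batch response, in the format shown above: a JSON array of
# per-comment codings, one object per comment ID.
raw_response = """
[
  {"id": "ytc_UgwVN0ZCKCao6_Zjh414AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxkerxGD8MNCT62-Vl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

def lookup_coding(raw, comment_id):
    """Parse a raw batch response and return the coding dict for one comment ID,
    or None if the model's response does not cover that comment."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = lookup_coding(raw_response, "ytc_UgwVN0ZCKCao6_Zjh414AaABAg")
# coding["responsibility"] is "company", coding["emotion"] is "fear",
# matching the "Coding Result" table for that comment.
```

A linear scan is fine at this scale; if many look-ups are made against one response, building a `{id: entry}` dict once would avoid re-parsing.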