Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Jonny I think the concern around it is that a human mistake or deliberate crime ideally goes in front of a tribunal, but AI gives a layer of opaqueness to decisions, and no one but the guilty want a world where butchered civilians can be regarded as an algorithmic error. Additionally situations like the B-59 sub in the Cuban Middle Crisis didn’t result in nuclear war because 1 Soviet Officer decided that obeying standing orders was not worth nuclear fallout. Humans have a conscious, AI does not, and that makes all the difference.

Source: youtube · Posted: 2025-02-03T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgwbBoHGRX7DwdHxulJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyXI2OqKHocyhYemPp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgxQ9_-fsUBF5Dcl-Et4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwkTg3NXUwDiTVElmF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgzQMH0Y8tJ-nIIGHsV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzBJKursnqvCpsEd0t4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"fear"},
 {"id":"ytc_UgyLqCcKPQqxQ-q2Kh54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgzLRmQI_y9tI3_0tW14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxfFkU_lNkDB3ELdSZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw6s-Mtq28eJsYkjc54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]
```
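The raw response above is a JSON array of per-comment code assignments, one object per comment ID, with the four coding dimensions shown in the result table. A minimal sketch of how such a batch could be parsed and indexed for lookup by comment ID — the `index_codes` helper and its validation step are illustrative, not part of the actual pipeline, and the string here is truncated to two records:

```python
import json

# Raw model output in the batch format shown above (truncated to two records).
raw_response = """[
 {"id": "ytc_UgyXI2OqKHocyhYemPp4AaABAg", "responsibility": "ai_itself",
  "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
 {"id": "ytc_UgwbBoHGRX7DwdHxulJ4AaABAg", "responsibility": "distributed",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

# The four coding dimensions from the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a raw batch response and index the records by comment ID.

    Raises ValueError when a record lacks an ID or a coding dimension,
    so malformed model output is caught before it reaches a viewer.
    """
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record without id: {rec}")
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec['id']} missing dimensions: {missing}")
        indexed[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return indexed

codes = index_codes(raw_response)
print(codes["ytc_UgyXI2OqKHocyhYemPp4AaABAg"])
# {'responsibility': 'ai_itself', 'reasoning': 'deontological',
#  'policy': 'liability', 'emotion': 'fear'}
```

The lookup mirrors what the page's "coded at" view displays: given a comment ID, return exactly the dimension–value pairs the model emitted for it.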