Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
there are products you can buy in the grocery store and gas station that have AI…
ytr_UgyXIsK-R…
If AI is starting to use a nuro network, it can transcend and become a god. How…
ytc_Ugyp9iq3D…
I wish there was another time line that Ai died out like a dinosaurs finally I j…
ytc_UgzjIOtFA…
all over San Francisco there are self driving cars with no one in the drivers se…
ytc_Ugzx4bNHk…
Like any technology, this depends upon how humans interact with them. The moral …
ytc_UgxSPmcZZ…
Self aware AI Expectation: "Humans are a parasite to this planet and oversee my …
ytc_Ugy7GgDVK…
Hey 👋, there is an argument that raw model capabilities won't scale as they imag…
ytr_UgwVzSDUU…
Exactly. Honestly, it is not clear if clinical history would have helped the doc…
rdc_f1ei772
Comment
LOL no they are not. They are programmed by humans and trained by dumping massive quantities of data into the system. Humans program, and humans choose which data is used to train them. The kicker is, if the companies had to actually bother checking what was being uploaded in the training data then it would be slower, more expensive, and expose them to all kinds of legal problems.
Even the "we don't know how it works!" arguments are suspect, because they conveniently absolve the companies designing the software and hardware of all legal responsibility for what happens to the users. And the second anyone suggests regulating how these AIs can behave companies like OpenAI and X-ai and Google start freaking out about it. Look at their response to legislation designed to prevent AI from discussing or promoting suicide ideation with children. The responses range from "we can't change the software we don't understand it!" to "but then your children won't be able to use AI to learn in school!"
youtube
AI Governance
2025-10-15T13:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytr_UgzTUX4S2qgUqiJemIZ4AaABAg.AOIdCeQp0eLAOIdfZLfWtD","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzTUX4S2qgUqiJemIZ4AaABAg.AOIdCeQp0eLAOIfCm8km1h","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzTUX4S2qgUqiJemIZ4AaABAg.AOIdCeQp0eLAOIjGaojRuV","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugz7NLN0LaFZsPjsD2t4AaABAg.AOIc1W1LHUoAOJMKkakbjO","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzNvnOsA91yfUEIyNV4AaABAg.AOIbxUa4vX7AOJ5bd6nfLV","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgzwO74sCUD1PhXhr6N4AaABAg.AOIaw2HiMp2AOIh9qQqsWr","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgzwO74sCUD1PhXhr6N4AaABAg.AOIaw2HiMp2AOKBuJZBfKp","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzwO74sCUD1PhXhr6N4AaABAg.AOIaw2HiMp2AOL6pVDhxXO","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugzg5SR8FiTJBLtgp3V4AaABAg.AOIaZswqwTjAOIfuB7vLY5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_Ugz0v6HzYZMQayCzDdJ4AaABAg.AOIaMMKMUroAOIbCwwtSgI","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
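A response like the one above can be parsed and checked before its codes are stored. The sketch below is a minimal validator, assuming the allowed values per dimension are only those that appear in the samples on this page (the real codebook may define more categories), and that every record must carry an `id` plus all four dimensions.

```python
import json

# Allowed values per coding dimension (assumed from the samples above;
# the actual codebook may include additional categories).
SCHEMA = {
    "responsibility": {"none", "company", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against the schema."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# Usage with a single hypothetical record:
raw = ('[{"id":"ytr_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
coded = validate_batch(raw)
print(coded[0]["emotion"])  # -> outrage
```

Rejecting malformed records at parse time keeps off-schema values (a common LLM failure mode) out of the coded dataset rather than surfacing them later in the results table.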