Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below:
- ytc_UgzJTC0Ko… "Maybe cuz most of the world still racist 🥱 there are still sundown towns so... I…"
- ytc_UgyDx3D2E… "Don’t take God (our Creator) Yeshua the almighty out of the equation, AI is not …"
- ytr_Ugy7448Ce… "Bunch that with already shitty images of people on ctv cams it makes sense for a…"
- ytc_UgzRasMHT… "ai will not replace the front end Web Designer only ai knowledge people can rea…"
- ytc_UgwRC82BW… "Ironic, the retaliation of AI art stealing peoples artwork is to completely redr…"
- ytc_Ugw4r8Gfi… "the robot mistook him for a box of vegetables because he probably had to use a w…"
- ytc_Ugx7Q3LKS… "As soon as higher taxes for the rich was mentioned, he panicked and shut it down…"
- ytc_Ugyn4Y3cj… "the only reason people are so against ai is because they know their job is done …"
Comment
@Fauxbourg
1. Post training is a drop in the bucket compared to pre training data
2. They are not programmed in the sense that Klein was implying. You can update the model. You can modify the system prompt. But you can't update the ethics module. You can't eliminate the bad parts and keep the good parts.
I agree with you in the sense that AI labs want us to think this is more magical than it is, but I also think it's dangerous to try to brute force our way to "AGI". You can't expect 1000:1 investment of AI development over safety research is going to lead to desirable outcomes
youtube · AI Governance · 2025-10-16T14:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_Ugx4UUsl3Vr7QKPHPj94AaABAg.AOIfuoUhFPyAOIlWuMUu8M","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_Ugx4UUsl3Vr7QKPHPj94AaABAg.AOIfuoUhFPyAOIm-PxLGhl","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugx4UUsl3Vr7QKPHPj94AaABAg.AOIfuoUhFPyAOIuu6oOTui","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyE8PWRjmF_Gt9BXTV4AaABAg.AOIfqT9nhMeAOIiMVEaF5z","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxwvVkz3X4Q-_4-ScF4AaABAg.AOIeks4t9buAOL7JFWajFb","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxwvVkz3X4Q-_4-ScF4AaABAg.AOIeks4t9buAOLPR6XweGD","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxwvVkz3X4Q-_4-ScF4AaABAg.AOIeks4t9buAOLQE0QI6SJ","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugx4eUXpx3rAsc4hECl4AaABAg.AOIebs4inJjAOIuYzVy9Ha","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugx4eUXpx3rAsc4hECl4AaABAg.AOIebs4inJjAOIudvD-h_O","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugx4eUXpx3rAsc4hECl4AaABAg.AOIebs4inJjAOIw6rXclT9","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
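The lookup-by-comment-ID flow described at the top can be sketched by parsing this raw response and indexing the records by their `id` field. This is a minimal, hypothetical sketch (not the tool's actual code): the dimension names come from the Coding Result table above, and the two sample records are copied verbatim from the response.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytr_UgxwvVkz3X4Q-_4-ScF4AaABAg.AOIeks4t9buAOL7JFWajFb",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugx4eUXpx3rAsc4hECl4AaABAg.AOIebs4inJjAOIw6rXclT9",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

records = json.loads(raw_response)

# Sanity-check that every record carries all four dimensions.
for rec in records:
    missing = [d for d in DIMENSIONS if d not in rec]
    assert not missing, f"record {rec['id']} is missing {missing}"

# Index by comment ID for O(1) lookup.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytr_UgxwvVkz3X4Q-_4-ScF4AaABAg.AOIeks4t9buAOL7JFWajFb"]
print(rec["responsibility"], rec["emotion"])  # developer indifference
```

The same index supports the "inspect the exact model output for any coded comment" use case: given the ID shown next to a sample, the full coded record comes back in one dictionary lookup.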