Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Most general-purpose LLMs are bad by default. They mirror the data they’re trained on—and that data is us, the monster is us.
The fine-tuning layer exists to make them nice and aligned to a particular moral, but it’s labor-intensive and expensive.
Elon thought he could skip it and cheap it out.
You might also train your llm core only on nice data, and have no monster in it. But we are lacking the amount of data necessary, because we are monsters.
The other possibility is to invent new, more elaborate low level architecture. The current design with a simple activation function and weight is an over-simplification of the real biological neuronal system it tries to mimic. A better architecture would allow to train the models with much less data.
Source: youtube · Video: AI Moral Status · 2025-12-12T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx-gKiJl4DA6-3TMj94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx0RPYyQzjewD5LoCZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwPlM8X9R1I-IYUfAl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwPFUGkrCzsqnWi0yh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxm7-51VLEPCeWaUHB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxG7PZzziicwtPtsdd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz990vwa4FndOtcG4p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx4L26Rp-Tx9BcOVGV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzfaiMa41DwIIF44jd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwTqyXrj9dD5m6E_c54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
```
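The model codes comments in batches, so extracting one comment's coding means indexing the raw JSON array by comment ID. A minimal Python sketch of that lookup, using two entries copied from the batch above (the variable names and the lookup approach are illustrative assumptions, not the tool's actual implementation):

```python
import json

# Raw LLM batch response, abridged to two entries from the array above.
raw_response = """
[
  {"id": "ytc_Ugxm7-51VLEPCeWaUHB4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxG7PZzziicwtPtsdd4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the batch by comment ID so a single coding can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for one comment, as the "Coding Result" panel does.
result = codings["ytc_Ugxm7-51VLEPCeWaUHB4AaABAg"]
print(result["responsibility"], result["emotion"])  # developer resignation
```

Indexing once into a dict makes repeated ID lookups O(1), which matters when inspecting many comments from a large batch response.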