Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Here's what I wonder: it's been said to me that you can't have a good predictive AI without training it on everything, mostly because of volume. I'm actually not convinced of that, so my first question is:
Could we create content designed specifically to be weighted towards what we consider moral and of service to humanity? In other words, could we create a sort of LLM Bible that has all of the best responses to the various questions humanity has asked? Another way of looking at it: could we train an AI while avoiding works like Mein Kampf and the Unabomber's manifesto, and if we did, would that avoid some of the problems we're worried about?
My second question is, if we train AI on the same document more than once, does it work to reinforce the patterns of that document? In other words, can we weight the training data and have the AI look at that training data many more times? Say we want the AI to be able to suggest harmful actions, but we're only going to give it one percent of that, while we give it the writings of the Buddha 100 times, and have those kinds of writings be 99% of what it is trained on?
My final question is, let's say there simply isn't enough content digitized and available to properly train an LLM or whatever we're calling today's models. If you train these models on all content, can you retrain them on desirable material to once again weight their responses to lean towards the ideas, concepts, and solutions that we prefer?
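The commenter's second question, about repeating some documents far more often than others, is essentially what data up-weighting does in practice: rather than duplicating files, a training pipeline samples documents in proportion to assigned weights. A minimal sketch of that idea (the corpus, weights, and 99/1 split below are illustrative assumptions taken from the comment, not any real training setup):

```python
import random

# Hypothetical corpus: each document carries a sampling weight.
# Up-weighting a document makes the trainer draw it more often,
# which is the usual way to "repeat" data without duplicating files.
corpus = [
    {"text": "writings of the Buddha", "weight": 99},
    {"text": "harmful content (heavily down-weighted)", "weight": 1},
]

weights = [doc["weight"] for doc in corpus]

def sample_batch(n, seed=0):
    """Draw n training examples according to the corpus weights."""
    rng = random.Random(seed)
    docs = rng.choices(corpus, weights=weights, k=n)
    return [d["text"] for d in docs]

# With weights 99:1, roughly 99% of sampled examples come from
# the up-weighted document.
batch = sample_batch(1000)
share = batch.count("writings of the Buddha") / len(batch)
```

Whether such weighting alone produces the behavioral shift the commenter hopes for is an open empirical question; the sketch only shows the mechanism.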
youtube
AI Moral Status
2025-11-03T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzA6dK2z04wRANgow94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz2MC5eEVARGuy3CCB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyLTRKwkIst_sth2h94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwYfnt7J6wTRHSiBcN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyKG33hoks_foVgtWF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgycTSlwAwauHeOXfXl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzPCeLMayKt3iNFdax4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyvQRoflPZn7t69o_x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfID0Gt7h0dycer3t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyFYDQ_c_-eg1-JyO94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
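The coding table above is recovered from this raw batch response by parsing the JSON and selecting the row whose `id` matches the comment. A hedged sketch of that lookup, with a basic schema check (the dimension names come from the response above; the `validate` helper and the truncation to two rows are illustrative assumptions):

```python
import json

# Two rows copied verbatim from the raw response above; the full
# ten-row array parses identically.
raw = '''[
{"id":"ytc_UgzA6dK2z04wRANgow94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgycTSlwAwauHeOXfXl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"}
]'''

# Every coded row must carry exactly these dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate(rows):
    """Raise if any row is missing a coding dimension."""
    for row in rows:
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
    return rows

rows = validate(json.loads(raw))

# Index by comment ID so a single comment's coding can be looked up.
by_id = {r["id"]: r for r in rows}
coding = by_id["ytc_UgycTSlwAwauHeOXfXl4AaABAg"]
```

Validating before indexing matters because LLM batch outputs occasionally drop or rename keys; failing loudly beats silently recording an incomplete coding.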