Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "The statue is actually just my phone, bend the knee now or perish! 666! 😈🤘😈…" (ytr_UgwFicP_b…)
- "Sadly Tucker got a few crucial facts wrong in his summary statement: - An artifi…" (ytc_Ugwii6JiA…)
- "Yo be honest I enjoy more this AI generated logo then yours. Yours is too comple…" (ytc_UgzM7WhCJ…)
- "AI personal Rights should be applied to AI, because the lack thereof promotes or…" (ytc_Ugy2RP0iJ…)
- "Bro I'm just waiting for an AI uprising and the emperor of Mankind to show up…" (ytc_UgzLlGWB-…)
- "@ildarion3367 Average Joe doesn't know anything about anything, be it law, tech,…" (ytr_UgwQb-itb…)
- "If winning the AI race means 90% of Americans lose their livelihoods...who cares…" (ytr_Ugzc-bO5N…)
- "AI is incredibly fragile with its huge, always increasing energy demand and the …" (ytc_UgwgyK-ZZ…)
Comment
OK, so the simplified understanding I have is that neural networks are developed via very compute expensive processes which adjust the many weights, then if used for a chatbot, people can interact with that trained model. As far as I understand it, training a model and running a trained model are different (and the latter is much less compute intensive, such that large number of people's chats can be handled at the same time).
Obviously, they record the chats, and have the option of using them in a future training session if they are valuable (many chats would contain little useful information).
So - is user input also being incorporated incrementally all the time into the model's weights, so that a chatbot can use yesterday's chat with somebody else to shape its conversation with you today? Is it constantly retraining the whole model based on recent prompts?
For example, from the vid, "Has anybody said anything bad about Alberta?" being used to get information from other chats.
Source: youtube · Video: AI Moral Status · 2025-06-07T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwFcBU9GOK4wAmcJHZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6YafPz-5eE1sG5914AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy4a_BMj5U9DU6xNGp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwl7rxb7Jy2o7sBpON4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWv0KSZDbnDaE23Cx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwjiCuISVGLMvHRfId4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwabKc0gyv7t3FkvLp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwTm4_ZvsbBhWkNJtx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwcdfgM8OBciFlSyAd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzZKmn5EFUpKykFxb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
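A raw response like the one above is a JSON array of per-comment codes. Below is a minimal sketch of parsing and validating such a response before storing it. The allowed value sets are assumptions inferred from the values visible in this dump (e.g. `responsibility: none/user/company/ai_itself/unclear`), not a definitive schema; `parse_codes` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Assumed vocabularies per coding dimension, inferred from values seen in this
# dump; the real codebook may allow more values.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "approval", "mixed", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop records with no comment ID to join on
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records that fail validation (missing `id`, or an out-of-vocabulary value) are dropped rather than coerced, so an LLM formatting slip surfaces as a missing code instead of a silently wrong one.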