Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgySt_vXZ…` — "Of course it did. You are putting it in a situation well you're gonna die or you…"
- `ytr_UgwJn_C8H…` — "@Jonathonson but ai image generation model's are'nt expensive to use, you can ru…"
- `ytc_UgxKXCyAj…` — "Thank you for the great real-life feedback on what is happening in the software …"
- `ytc_UgxjgNKHx…` — "Remember, dont hate on ai artists everyone. I dont care how much you hate ai to …"
- `rdc_dcifsw3` — "> The economy requires expenditure of energy and resources in order to genera…"
- `ytr_UgxC3h8s0…` — "If its faster and smarter than us, then we would automatically stop being a thre…"
- `ytr_UgxUJ7qip…` — "Eh, the people who think like this don't really understand how art works once yo…"
- `ytr_Ugz484TWj…` — "Mask's don't block the ability for Facial Recognition cameras to ID you, because…"
Comment
Re: How the model prioritizes, I think the word they're looking for is heuristics (common term in literature). In other words, defining the rule that says "Yes you trained well!" The biggest problem with training is developing a sensible heuristic. Model parameters and architecture affect training efficiency and fit, but models basically gamify whatever the creator says gives it highest score. If lying is the optimal way to score better, then it will do that. So if our scoring system says we want better similarity between the generated text and training text, then it will do that by whatever means gets the score the highest. Mathematically modeling correctness and humanness of text is not an easy thing.
(Disclaimer, I am a PhD student that uses machine learning but not an LLM researcher)
Source: youtube · AI Moral Status · 2025-11-12T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgylT8svfl2oMUW4U-F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyLfTtZm68taB7U9cp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxN-eoii1kT-akvwbF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyxl84xUl_8ihgo6Oh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyI7zZSTLvif6F1Eex4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy_r8BM6oN8TggtiKZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-fujppI_piFchIax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwpWiqiuGSZK0WrC594AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzoqF7ccItFYIIxCMl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxj-Qku7l1HM-CHoU94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
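The raw response above is a JSON array of per-comment codings. A minimal sketch of how such a payload might be parsed and sanity-checked before it reaches the coding-result table — the allowed value sets here are inferred from the records shown on this page, not from an official codebook:

```python
import json

# Value sets inferred from the codings visible on this page;
# the actual codebook may allow additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of codings) and
    reject any record with an out-of-codebook dimension value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
    return records
```

A check like this catches the common failure mode where the model invents a label outside the coding scheme, which would otherwise silently corrupt downstream tallies.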