Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a based and true take. LLMs are a probability machine trained on some good code and bad code, it doesnt distinguish between them. All it looks at is the probability of the next token based on the previous tokens. It doesn't understand logic. Relying solely on an llm to write performant, up to date and secure code is a bad idea. Week 1 of any stats course "we deal in probability, not absolutes."
youtube 2025-08-19T10:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyO2jj-psCEZwO7ZyR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxNuoqP9wnVDWLpbYJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz9fwEas3Ih8kdWKQh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxIC-2GUT48rlUQDnp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzkpHfB7JAIbQQ7E1J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwxvGtCD6VKwGekf_Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxBHYCqi8RewwnX4-V4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyMuffj2D3KLXcw-kN4AaABAg","responsibility":"stakeholders","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzrFTDrj6HjocAMDQp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzcxr2o7TvbaMv0ksB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
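A raw response like the one above can be parsed and checked against the codebook before the per-comment dimensions are stored. The sketch below is a minimal, hypothetical validator: the allowed label sets are inferred only from values that appear in this response and in the coding table, so the real codebook may include more categories, and `parse_coding_response` is an illustrative helper, not part of the actual pipeline.

```python
import json

# Allowed labels per dimension, inferred from the values seen in this
# response; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "user", "developer", "ai_itself", "stakeholders"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "liability", "none"},
    "emotion": {"indifference", "fear", "resignation", "outrage", "mixed", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and validate each record.

    Returns a mapping from comment id to its coded dimensions.
    Raises ValueError on out-of-codebook labels.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example: the record that produced the coding table above.
raw = ('[{"id":"ytc_Ugzcxr2o7TvbaMv0ksB4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded["ytc_Ugzcxr2o7TvbaMv0ksB4AaABAg"]["policy"])  # regulate
```

Validating at parse time surfaces model drift (a label outside the codebook) immediately, instead of letting it silently skew downstream counts.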