Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Before dropping my comment here, a word on who I am: I have been a software engineer and neural network engineer for 6 years now. So this is true: we develop most machine learning models, like image detection and pattern detection, but an AI like ChatGPT is trained on an enormous amount of data (1.24T, i.e. trillions), so it can be anything. When it comes to an LLM (large language model), developers fine-tune it and put rules on the main core, so they can shape the model as they want. For example, ChatGPT and Gemini are more friendly, while Grok is more flirty. But if we run the model without any fine-tuning or without any rules, the situation turns upside down. For example, ChatGPT checks, before the model responds, whether the content is NSFW or not. That check sits in the core, and that's why people can't break it, though sometimes some people manage to do it. In theory, if we run a model without that rules layer, it can be anything, and that's dark.
YouTube AI Moral Status 2026-01-22T10:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          industry_self
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyJxTwvrk_nnq0f7Mt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyPsc7-l4MCpx2ymat4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugz7kz8dlw42wbRQ5S14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwGOADygqc8L-qMl7V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwMpC-PZBZT5mwpjoB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzJ190IKZpLvLWLSWt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzZiuw259EEA7ds75t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwNWwI7cOdQ1iHG6id4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw1hJ65iKsTegvcJHd4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugww5RPRSXwetYx74Kd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
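To inspect raw responses like the one above programmatically, a minimal sketch follows. It assumes the response is available as a JSON string; the variable `raw_response` (shown here with an abbreviated two-record sample) and the helper `codes_for` are illustrative names, not part of any tool shown on this page:

```python
import json

# Abbreviated sample of a raw LLM response: a JSON array of per-comment codes.
raw_response = (
    '[{"id":"ytc_UgyJxTwvrk_nnq0f7Mt4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"approval"},'
    '{"id":"ytc_UgyPsc7-l4MCpx2ymat4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}]'
)

def codes_for(comment_id, raw):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw):
        if record["id"] == comment_id:
            # Drop the id so only the coding dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None

print(codes_for("ytc_UgyPsc7-l4MCpx2ymat4AaABAg", raw_response))
# → {'responsibility': 'developer', 'reasoning': 'consequentialist',
#    'policy': 'industry_self', 'emotion': 'indifference'}
```

Looking up a comment id this way reproduces the dimension/value pairs shown in the Coding Result section.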