Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In my opinion, the solution to this is relatively simple, we restrict what text data is fed to AI! If it's trained only with text that falls on the spectrum between neutral and strongly moral, that is all it will know. The problem now is we're purely feeding it as much text as we can, with a complete lack of discretion. However, AI also performs better with mass quantities of data, so sorting the input data would be an immense task. Maybe the solution is to train a smaller model on identifying moral text, and having it filter the data input of the larger AI models?
youtube · AI Governance · 2025-08-27T04:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          approval
Coded at         2026-04-26T19:39:26.816318
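
For anyone consuming these results programmatically, here is a minimal sketch of how one row of this table could be represented in Python. The class and field names are illustrative assumptions, not the project's confirmed schema, and the value sets in the comments are simply those observed in the raw response below.

```python
from dataclasses import dataclass

# Hypothetical container for one coding result; the field names mirror
# the table's dimensions and are not a confirmed pipeline schema.
@dataclass
class CodingResult:
    comment_id: str      # e.g. "ytc_Ugy4otCoxcyQOrckVSF4AaABAg"
    responsibility: str  # observed values: developer, company, ai_itself
    reasoning: str       # observed values: consequentialist, deontological, virtue
    policy: str          # observed values: regulate, ban, none, unclear
    emotion: str         # observed values: approval, outrage, indifference, mixed
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T19:39:26.816318"
```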
Raw LLM Response
[ {"id":"ytc_Ugy2sFO9gMXP5iBlB-h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugy4otCoxcyQOrckVSF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugw29efcp4iGac6pNZF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyOOYnhUQbsniVAfOl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxiArPGzLjLsyh6b9R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"} ]