Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Maybe the solution is to remove any notion of ethics, morals, and justice from their training data, and then let the AI come up with its own morals. They might be more appropriate and less misguided. They could also be more dangerous.
YouTube · AI Harm Incident · 2025-08-26T22:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        contractualist
Policy           industry_self
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugwybx8F8vTnHtTioU14AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwuDFWlUDw3hYc99Px4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugw2-ygI4y1XIdHHxzZ4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwQ0g0_ZB0-FISEAsp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxCJ9wpLFxRz4N3gaZ4AaABAg", "responsibility": "developer", "reasoning": "contractualist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgyB18ePXJI8-FbZUGB4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwyKFPGKN68JBZSX0l4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyB5Ay8gFxOJei42894AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyPrhN5PobZqZSUtS94AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxkJV7B2PHx7RHNYVF4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"}
]
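To trace a coded result back to the raw model output, the batch response can be parsed as JSON and matched on the comment id. A minimal sketch (the id and field names are taken from the raw response above; the `coding_for` helper is hypothetical, not part of any shown tooling, and the array here is truncated to two entries for brevity):

```python
import json

# Two entries copied from the raw LLM response above, truncated for brevity.
raw_response = """
[
 {"id":"ytc_Ugwybx8F8vTnHtTioU14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxCJ9wpLFxRz4N3gaZ4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"industry_self","emotion":"mixed"}
]
"""

def coding_for(comment_id, response_text):
    """Return the coding row for one comment id, or None if absent."""
    for row in json.loads(response_text):
        if row["id"] == comment_id:
            return row
    return None

coding = coding_for("ytc_UgxCJ9wpLFxRz4N3gaZ4AaABAg", raw_response)
```

For the comment shown above, the returned row carries the same four dimension values as the coding table (developer / contractualist / industry_self / mixed).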