Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1:37:25 no, we need to manually insert an Infinity into the motivation matrix of AI when it comes to any harm to humans.
YouTube · AI Governance · 2025-12-05T06:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxWv4e-30XyWh7rASx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyGDvrlOElx8GAvdXF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzBVkPjaUxrc_Asrm14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzN2E1yodJqW4dk_s94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwKb9gcv7x37yI2MF54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxdjc2N7hjP8rZ75tJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxdTQoitg2MPpmEZWp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwbSAgXZUULG42cxod4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyz7s6qT5_yRFYl5EJ4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxYRUAT0KdD9kjkvxR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
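When inspecting raw model output like the array above, it helps to parse and validate it before merging into the coded dataset. The sketch below is a minimal example of that check; the allowed labels per dimension are inferred from the responses shown here and are an assumption, not the project's actual codebook.

```python
import json
from collections import Counter

# Allowed labels per dimension, inferred from the raw responses above.
# Assumption: the real codebook may permit additional values.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with bad ids or unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        if not row.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad id in row: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
    return rows

# Hypothetical one-row response in the same shape as the raw output above.
raw = ('[{"id":"ytc_A","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
rows = validate(raw)
print(Counter(r["emotion"] for r in rows))  # tally of coded emotions
```

A failed check pinpoints the offending comment id, which makes it easy to trace the error back to the exact model output shown on this page.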