Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, is he telling us there is no way (at present time anyway) to teach AI to recognize harm and remove the risk of AI further developing anything it recognizes as harm?
youtube AI Governance 2025-07-14T15:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyPG2vDXaihuJ7VefZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugws2l-WckR1OPMB22Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYSiTuQhJNRN5sZnV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwflwvMMqrQUNdYcc94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw0YaZRxXrg3gy9-dF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzU8DqHK7hBVDhHrNl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwy8CK-hJ4YFUP3wsN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxDfWvyQXJs3-4Dgnh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz3ERBoMO7PKDOhPX94AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz5uVbhM38PrICYre14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
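The raw response above is a plain JSON array, one object per coded comment. A minimal sketch of how such a batch could be parsed and checked before it is trusted; the required field names come from the response itself, and the two embedded records are copied from it (the full batch has ten). This is an illustration, not the project's actual ingestion code.

```python
import json
from collections import Counter

# Two records copied verbatim from the raw LLM response above.
raw = '''[
 {"id":"ytc_UgyPG2vDXaihuJ7VefZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgxDfWvyQXJs3-4Dgnh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]'''

codes = json.loads(raw)

# Every record must carry the four coding dimensions plus the comment id.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}
for record in codes:
    missing = REQUIRED - record.keys()
    if missing:
        raise ValueError(f"{record.get('id', '?')} is missing {missing}")

# Tally one dimension across the batch.
by_responsibility = Counter(r["responsibility"] for r in codes)
print(by_responsibility)
```

Running the same check over the full ten-record array would surface any record where the model dropped a dimension or returned a malformed object.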