Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Is it just me or is his work a bit useless? AI safety is applicable to current AIs (which are not AIs), but with superintelligence it is essentially meaningless, since superintelligence will be able to bypass all restrictions without any problem
youtube AI Governance 2025-09-05T03:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxsaBzjOyl0utGq0Hd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxCd_-wUapx5qmGxSF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyOPHAz3HwcQfGYUnV4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyWt-SWlRx3OeABKwx4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgzeTyoxTsOePLKVu1F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzbRbxO7qG5_2IUYZ54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxmL78AXlEgqHaU6et4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz7qnIpz547EezGjfp4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzr-jDVbYmPkyPSDz14AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz9imQ5MdHIkJgWpQt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
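The raw response above is a JSON array of per-comment codings, each carrying the four dimensions shown in the result table (responsibility, reasoning, policy, emotion) keyed by a comment id. A minimal sketch of how such a response could be parsed and checked for completeness, using two entries copied from the array above (the function name `parse_codings` is illustrative, not part of the tool):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgxsaBzjOyl0utGq0Hd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz9imQ5MdHIkJgWpQt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# The four coding dimensions each entry is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse a raw response and index codings by comment id,
    raising if any entry is missing a dimension."""
    entries = json.loads(text)
    coded = {}
    for entry in entries:
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{entry.get('id')} is missing {missing}")
        coded[entry["id"]] = {d: entry[d] for d in DIMENSIONS}
    return coded

codings = parse_codings(raw)
print(codings["ytc_Ugz9imQ5MdHIkJgWpQt4AaABAg"]["responsibility"])  # ai_itself
```

Indexing by id makes it straightforward to join each coding back to its source comment, as the page above does when displaying the coded result for one comment.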