Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like theae conversations muddy the waters, what we currently have is not true AI. Certainly what we have could still be dangerous if you trained it to be, gave it access to the right tools etc. Still requires a human at multiple levels, Im not saying these generative models can't kill us but that it would require humans to knowingly let it happen/want it to happen.
youtube AI Governance 2025-06-26T09:5…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          industry_self
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxfWBHGFXjB2Z94ht54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwJbnuWhptat399tLZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzYrui-4eQIuhH9iXV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwGnYQtlZk_qlPcl5Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxT_TYeDGpExzsU6t14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxDf6m-9O_ii7bhHxt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy3eI72XegwdkZEjph4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyW0lBhmP4FLiDEvTt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwKyb4xhwRK0Clw6nR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyDMLM-y8UFNmyeUNt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
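A raw response like the one above can be parsed and validated before the codes are stored. Below is a minimal Python sketch; the `SCHEMA` category sets are inferred from the values visible in this sample and are an assumption, not the full codebook.

```python
import json

# Allowed values per coding dimension.
# ASSUMPTION: inferred from the sample response above; the real codebook
# may contain additional categories.
SCHEMA = {
    "responsibility": {"user", "developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed", "unclear"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows that match the schema."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs in this dataset carry a "ytc_" prefix.
        if not str(row.get("id", "")).startswith("ytc_"):
            continue
        # Every dimension must be present with an allowed value.
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_Ugy3eI72XegwdkZEjph4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}]')
coded = parse_coding(raw)
print(coded[0]["policy"])  # industry_self
```

Rows that fail validation are dropped rather than repaired, so any hallucinated category from the model surfaces as a missing row instead of a corrupt code.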