Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am more concerned about HI ( human intelligence) than AI. Or lack thereof. If AI is like HI, we are in big trouble, given what we see going on in the world. HI cannot actually reason beyond what is known. Neither can AI. HI is subject to bias, so is AI. When humans are biased you risk nuclear war. What happens when AI is biased?
youtube AI Governance 2022-08-25T12:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxxqupfgeS5AwaXCyV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxAitEhy_9HL5O50WV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxfiHKEtQEKGq-Hs7F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyuseJBrk1-vLl84Ud4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugw8x6hBQY3SsXcLWkh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyc7RgxgkJwGojlplJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxjpG_3QWMB_cbRADR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzVgJPlehGXVcMdRvh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxzzsnevwBBUtpMIcx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwlkRU8DMXO1zUNnTJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
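A raw response like the one above can be parsed back into per-comment codings with a few lines of Python. The sketch below is a minimal, hypothetical helper, not part of any shipped pipeline: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, while the function name `parse_codings` and the validation logic are illustrative assumptions. Only two records from the response are reproduced here for brevity.

```python
import json

# Two records from the raw LLM response shown above (truncated for brevity).
raw = '''[
  {"id": "ytc_UgxxqupfgeS5AwaXCyV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyuseJBrk1-vLl84Ud4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"}
]'''

# The four coding dimensions plus the comment id, as they appear in the response.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw_response: str) -> dict:
    """Parse a raw LLM response into a {comment_id: coding} lookup.

    Raises ValueError if any record is missing a required dimension,
    so malformed model output fails loudly instead of silently.
    """
    records = json.loads(raw_response)
    codings = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {missing}")
        codings[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS - {"id"}}
    return codings

codings = parse_codings(raw)
print(codings["ytc_UgyuseJBrk1-vLl84Ud4AaABAg"]["emotion"])  # resignation
```

Keying the result by comment id makes it straightforward to join a coding back to the original comment, which is how the "Coding Result" view above (dimension/value pairs for one comment) can be produced from the batch response.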