Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like the only solution would be based on the computing power required to run the AI being centralized. Sure, running instances across multiple systems globally could potentially allow it to function, but not to a degree at which it would be "super intelligent". Isolate crucial infrastructure so that controls are analog and still operated by humans (ironically a step back to an analog interface), with information provided to the AI on a one-way basis. Until the superintelligent AI has the ability to interface directly with robots capable of physically interacting with our world, we could isolate its ability to use or disrupt crucial infrastructure. Limiting the connectivity of the robots would be essential, perhaps to the point of them running on general AI models, and restricting radio / wifi communications would mitigate the "boss" AI influencing them directly. Just brainstorming; a very interesting / maddening thought experiment. I'm both pleased and regretful that I watched this interview 😅 Thanks for continuing to serve up pure gold
Source: youtube · AI Governance · 2025-09-27T00:0…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwcdlyjVFVdC85NfRF4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgztxRT3IGpMxo3HLZx4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgzHzwkSOfRtd4awWOJ4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugy6H3o5-9LKOtNxzTt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxdBDxjSE5L8i4M4op4AaABAg", "responsibility": "company",     "reasoning": "contractualist",   "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgxECd8tSrWj_8eQB_x4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugx8eqGsvV1pM_9sGWN4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgxCShTmQm9oc4vc-K54AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_UgzHoHsgv6qYNyGELSJ4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgyPtcVrGD5mkPJ_hbl4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"}
]
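The raw response is a JSON array with one object per comment, keyed by comment id, with the four coding dimensions as string fields. A minimal sketch of how the coding for a single comment could be pulled out of such a response (using only the Python standard library; the helper name `coding_for` is hypothetical, not part of any tool shown here):

```python
import json

# Raw LLM response, as displayed above: a JSON array of per-comment codings.
# One entry is reproduced here; field names match the displayed output.
raw = '''[
  {"id": "ytc_Ugy6H3o5-9LKOtNxzTt4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "fear"}
]'''

codings = json.loads(raw)

def coding_for(comment_id, codings):
    """Return the coding dict for one comment id, or None if absent."""
    return next((c for c in codings if c["id"] == comment_id), None)

row = coding_for("ytc_Ugy6H3o5-9LKOtNxzTt4AaABAg", codings)
print(row["responsibility"])  # distributed
```

Looking up by `id` this way is what links a row in the coding-result table back to the exact object the model emitted for that comment.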