Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I would say that I view AI from a technocratic, futuristic, expansive lens. End result, I respect Tegmark's views a lot but value the impossibly wide array of unknown future benefits for humanity even more. And given the hundreds of serious challenges humanity has to face up to, including humanity itself, all the more reason to urgently forge a path forwards with AI. I am concerned that AI could be used to control and manipulate people in the near term. And whether open source or closed is better here I'm not sure. And exactly how to prevent a single person from creating bioweapons etc. is also a question mark. Perhaps practically monitoring and controlling resources required for this, using AI, would be one step.
YouTube AI Governance 2023-07-04T03:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgwTjak25URYjjUSaHZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwYZxy_MuFCWovZcHp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyPZK7sBFowZ7W59KZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzWoKTW1XVC3hRck-d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzV3WZVP4EPhBf9YO94AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxMS6qzDPPh9v7z9V14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugw9QSOyAiqYzgFI_EB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugyi_gkRyn5QFEohOMt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzhgbL9ssnNPSoPVXN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw_bJsdj1oBgFlOo194AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"})
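Note that the raw output above is not valid JSON: the array opens with "[" but closes with ")", which would make a strict parser fail and may be why every dimension in the coding result shows "unclear". A minimal sketch of inspecting such output programmatically, using an excerpt of two of the ten records with the trailing ")" repaired to "]" (the lookup id is taken from the raw response above):

```python
import json

# Excerpt of the raw LLM response (2 of 10 records), repaired to valid
# JSON; the original output terminated the array with ")" instead of "]".
raw = """[
  {"id": "ytc_UgwTjak25URYjjUSaHZ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwYZxy_MuFCWovZcHp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]"""

records = json.loads(raw)

# Index the coded records by comment id so a single comment's codes
# can be looked up directly.
by_id = {record["id"]: record for record in records}

codes = by_id["ytc_UgwTjak25URYjjUSaHZ4AaABAg"]
print(codes["emotion"])         # approval
print(codes["responsibility"])  # none
```

If the parse raises `json.JSONDecodeError` on the unrepaired output, falling back to "unclear" for every dimension would reproduce exactly the coding result shown above.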