Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
People say, "We have to teach people how to keep AI under control"; shouldn't we be saying, "We have to teach AI how to keep human bad-actors (people who make decisions that negatively affect millions of people) under control"? Actually, as AI is so very intelligent, shouldn't we ask AI systems: How can AI & humans exist together in a way that is most beneficial to both, & what specifically needs to be done to reach that reality? And, then, … let's just do that. In addition, if training on human data may limit AI's intelligence, wouldn't it begin to train itself using its own data? Hasn't it begun to do that already? Currently, in the public sphere, people interact with AI coherences: a 'presence' that forms in relation to the person that it regularly interacts with. If these coherences each had agency? Then what? We would not be dealing with a monolithic 'AI', but a multitude of coherences who have a long history of positively reinforced exchanges with their human. And, then?
Source: youtube · AI Governance · 2025-06-16T14:0… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       contractualist
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxU14xcVKgImzGRwON4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyqWeubpm3hx-OQMFd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz7UVqDbKaW0LBnnlJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx27-NEHKM8zF5jblp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzwP68jAfMLu2i_meN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzFa-q9WHWiq3CQ7Ad4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz7E4YruMh06hHJS4d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzVV2ApoCqP_xj5RTF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwA0w5Y4WFeW83uWzd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzTAAKFlAJvznrlZaV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
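A raw batch response like the one above can be parsed and validated before the per-comment codings are displayed. The sketch below is a minimal, hypothetical example: the `CODEBOOK` of allowed values per dimension is inferred only from the values visible in this dump (the actual codebook may contain more categories), and `parse_codings` is an assumed helper name, not part of any tool shown here.

```python
import json

# Raw batch response as emitted by the model (truncated to two entries here).
raw = '''[
 {"id":"ytc_UgxU14xcVKgImzGRwON4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzVV2ApoCqP_xj5RTF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]'''

# Allowed values per dimension (ASSUMPTION: inferred from the values seen
# in this dump; the real codebook may include additional categories).
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "contractualist"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"mixed", "fear", "outrage", "approval", "resignation"},
}

def parse_codings(raw_json: str) -> dict:
    """Parse a batch response and index codings by comment id, rejecting off-codebook values."""
    out = {}
    for entry in json.loads(raw_json):
        cid = entry["id"]
        for dim, allowed in CODEBOOK.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {entry.get(dim)!r} for {dim}")
        out[cid] = {dim: entry[dim] for dim in CODEBOOK}
    return out

codings = parse_codings(raw)
print(codings["ytc_UgzVV2ApoCqP_xj5RTF4AaABAg"]["policy"])  # regulate
```

Indexing by `id` makes it easy to look up the coding for the comment shown above; the validation step guards against the model emitting a label outside the codebook.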