Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I predict that as soon as we see the negative side effects of LLMs on a worldwide scale, one nation after another will start to impose limits on whatever freedom A.I. agency has gained by then. Every nation should put security measures in place now to avoid possible negative consequences of A.I. decisions. More probably, though, we will be forced to shut down disruptive LLMs or A.I. agents / bots. One thing that is always forgotten: we might not be able to provide the electricity needed for A.I. agency to become dangerously powerful and "embodied" in any hardware system. I also disagree with the concept of consciousness described here. Neurologically, we do not even know where and how we store our memories. Basically, they are completely virtual and non-existent until we make the effort of remembering them, which is a very special form of biased mental projection. Every time we remember, we change our memories. Here is a test that proves this: try to remember a certain memory before you remembered it. Where is it? Good luck trying. We do not know where a memory is stored, nor the full phenomenology of human memory processes. Because of this, a present-day A.I. has nothing available resembling human memory systems. It is basically mimicry. We can produce "minds"; those are everywhere. But consciousness is more than a mind or an "intelligence", and I don't just mean its qualia.
youtube AI Governance 2025-06-17T12:4…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugz3d-s7IfN3vK_KzFh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzpMIiqpP3Cmc1LTXF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7JuRaydqYjf1cQ4R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx1Xaz-M0RjnGjaGeJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwqpI7ejCBfr2TTfCx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyYTvhIQwTmDWsni594AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxUweBofnfNTUpMHLV4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugznhjc2XtWrEvKOE-94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz1e29XYSdLhi5zUz94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw7xK-mVXIHCj-cokx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
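The raw response is a JSON array with one coding record per comment id, and the Coding Result above is the record whose `id` matches this comment. A minimal sketch of how such a batch response could be parsed and matched back to a single comment (the variable names are illustrative, not part of the actual pipeline; only two records are shown for brevity):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment id
# (abbreviated to two records here; the real response contains ten).
raw = """[
  {"id":"ytc_Ugz3d-s7IfN3vK_KzFh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugznhjc2XtWrEvKOE-94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

records = json.loads(raw)

# Index the batch by comment id so one coded comment can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

# Pull the record for the comment shown on this page.
coding = by_id["ytc_Ugznhjc2XtWrEvKOE-94AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # fear
```

Indexing by `id` rather than relying on array order makes the lookup robust if the model returns records in a different order than the comments were sent.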