Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Oh, I just saw that my previous comment was deleted! So I will not name any company or product x). For AI or for humans, can we agree that knowledge is the same? Then how do you define which knowledge is true? As for capability: there are 30 million React projects on GitHub, and yet go into AI Studio and the AI still makes mistakes on trusted knowledge (working code), so how can you imagine succeeding with data you cannot trust? The AI's context can reach 1 million tokens, but ask it to roll back to a previous stage without git and it cannot do it. And finally, if superintelligence is the breaking point, why build small specialized models, and why is the tool useful? I think we are human and love to believe we are special, but what is the difference between me and a robot? I consume power from food rather than diesel or electricity, I move with muscles and electric pulses like a robot, and my brain transmits electricity to think, like the robot. Taking that into account, how can you remove from the equation the new tool we need to become more "intelligent", like the JWST? So the AI robot and us, are we not limited by the same things?
Source: youtube · Video: AI Governance · 2025-09-04T19:1…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgyvZeECT9c464h87dZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwAY6f-uz7MCKJbI714AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgySCRwfedtmsNNNvwt4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyM1K6QzB_S2xlh16p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyOxWGt8PXfpXPPEGd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz-Kgqr4RduMecPIKB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"unclear"}, {"id":"ytc_UgzYONPyonEtcuHlnNt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_UgyIaJuUacWZKUVls2t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzkurUmtmXPf5eCOxd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwEL3PGeYXq8GYHkBV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"})