Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Problem is A.i. is hype and could be a bubble, at Huuuuge environmental & societal costs. Or it could become uncontrollable and just take over some day regardless of intentions. People think this world/ system is bad, don't know what's possible (I don't either, but could become meh failed experiment, or radically alter everything as the internet has or Worse). People profit, think one tool can fix 6+ problems quickly and effectively. It cannot, it can either be great at 1 or 2 things (specializing), or generally mostly good at a lot of things... but Noooo thru limited oversight they could rush things to be way more attemptedly competent than it could/ should be leading it to backfire, as did with self driving cars ran at least 1 person over. Happens, don't see why it wouldn't with a.i. ... don't think it is Anywhere Near being what they want it to be, and people/ societies nowadays tend to overhype and be impatient/ profit first/ worry about inevitable consequences later
youtube AI Governance 2026-01-02T18:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgwMkgH1DBteG3oQGSV4AaABAg.ARbhwLDOzGmARyprprABGp","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwMkgH1DBteG3oQGSV4AaABAg.ARbhwLDOzGmARyrL_RS1zN","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgwMkgH1DBteG3oQGSV4AaABAg.ARbhwLDOzGmARz1FUUakXz","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwpML6inDx-CY5LE8F4AaABAg.ARbW9wvvEN3ARoAU1EtL-_","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgxnqUQFe6VcqRCTYpN4AaABAg.ARbVpeBmG7JARnTDEurVCT","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxVgX9sbMRzB1Isva54AaABAg.ARb0iE-zeEQARbvqsd3Rqd","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugzz_xDCoHNeZCzZLxx4AaABAg.ARaeu4FNEP-ARop61z_CqN","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgxO0ZmWKcy28Q9LkYF4AaABAg.ARYcEwPfFJ_ARYeNAU7FO5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugw5hKVfcc5soYMlXGB4AaABAg.ARVtr3H99AbARVvvd8--U8","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugxez6VIMDzvUiyA10J4AaABAg.ARULmN6P_2xARUedeYQD3C","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
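The raw response above is a JSON array of per-comment coding objects, each carrying the four coded dimensions. A minimal Python sketch for turning such a response into a lookup table keyed by comment id (the function name `parse_codings` is hypothetical, not part of the coding pipeline):

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects)
    into a mapping from comment id to its coded dimensions."""
    records = json.loads(raw)
    return {r["id"]: {k: r[k] for k in DIMENSIONS} for r in records}

# Example with two entries copied from the raw response above.
raw = '''[
  {"id":"ytr_UgwMkgH1DBteG3oQGSV4AaABAg.ARbhwLDOzGmARyprprABGp",
   "responsibility":"company","reasoning":"consequentialist",
   "policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugxez6VIMDzvUiyA10J4AaABAg.ARULmN6P_2xARUedeYQD3C",
   "responsibility":"distributed","reasoning":"mixed",
   "policy":"regulate","emotion":"fear"}
]'''

codings = parse_codings(raw)
print(codings["ytr_Ugxez6VIMDzvUiyA10J4AaABAg.ARULmN6P_2xARUedeYQD3C"]["emotion"])  # fear
```

Looking up the last id recovers exactly the values shown in the coding-result table (distributed / mixed / regulate / fear).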