Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Lex doesnt see what he doesn't want to see. Granted this was/is still a better interview than Joe's... Hidden capabilities ? eg Take a single shot rifle and attach a bump stock - now you have a automatic in the hands of a maniac. Lex can't see what a maniac sees. The wild increase in power is due to the combination of the mind of a human who hacks the power of the model to achieve vastly increased power in both. eg Bot attack (Ddos). I just find it bizarrely ironic that Lex, who worked on building autonomous vehicles, and fellow developers and friends are all working on Ai Agents, aka fully autonomous vehicles ( cars, trucks, trains, ships and planes), and yet he pretends that we have not, or as yet, have not developed such tech ?? And actually Lex's constantly strawmaning his argument - the title of the discussion concerns superintelligence being uncontrollable - but Lex smirks his way through this discussion by saying none of the current narrow AI is uncontrollable, and hence superintelligence won't be either, and if it does go rogue humans are still smart enough to destroy it. FAIL. I gather Lex plays devil's advocate for the first hour and then stops being a dick. Thanks Roman - in the end you managed to completely expose all the slight of hand the 'magic' Ai show promoters use for the illusion that they are trying to acc. "Technology is a useful servant, but a dangerous master" ~ Christian Lous Lange ( Nobel Peace Laureate)
youtube 2025-07-26T03:4…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgwprATfFV36HDtMryd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzQQD1DH02Ch4ywd5F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyvSfnbJpdRu6ptCHR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugx1ZTEOhLM3wtuZjAB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxkGC0CE_7Lt4DWmxR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxGDwPcMoRiQbeUhAd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgzQ9Db389WW2yzzBCF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgymQt_83X-2JdfliQx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugy5b_1ODkaHnfvmbMJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwIxh9EARldj4G_Aep4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]
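To inspect the coding for a specific comment, the raw response above can be parsed as a JSON array and indexed by comment id. A minimal sketch in Python, assuming the response is valid JSON in the array-of-objects shape shown (the single-element sample here is copied from the last entry above; it is illustrative, not the full batch):

```python
import json

# Sample in the same shape as the raw LLM response above:
# a JSON array with one coding object per comment id.
raw = '''[
  {"id": "ytc_UgwIxh9EARldj4G_Aep4AaABAg",
   "responsibility": "user",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "outrage"}
]'''

# Index codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the four coded dimensions for one comment.
code = codings["ytc_UgwIxh9EARldj4G_Aep4AaABAg"]
print(code["responsibility"], code["emotion"])  # -> user outrage
```

The lookup mirrors the Coding Result table above: each object's keys correspond to the Dimension column and each value to the Value column for that comment.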