Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think Lex Fridman understands the speed of development, lack of safety and scale of impact a super intelligent AI will have. The race is about achieving a super intelligent AI with a view to gaining control. Ultimately, if a super intelligent AI had that much power why wouldn't it just decide to evolve in on its own evolutionary trajectory without humans holding it back. https://youtu.be/om5KAKSSpNg?feature=shared
youtube 2024-06-10T22:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzpLlNGFc3YJuNeRux4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwcmfqcgiBy3UKK_Dx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxOpm8Brpy_RzBvFLB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7WK25ydv724vvlfF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugze3e9pdWk1-9ARvlB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxXEKfbMZq_SFkchH14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzjIvlLUvmrQQGjE6d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxTA72GRTokAuYcYEl4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy0wINNYX1bofNiRsZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz5pCVmEXXuxeS96mh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
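The raw response is a JSON array of per-comment codes, so a single comment's coding can be looked up by its `id`. A minimal sketch in Python (two entries copied from the batch above; the field names are those that appear in the response):

```python
import json

# Raw LLM response as returned by the model: a JSON array of
# per-comment codes (two entries shown, copied from the batch above).
raw = '''
[
  {"id": "ytc_UgzpLlNGFc3YJuNeRux4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz5pCVmEXXuxeS96mh4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
'''

# Index the codes by comment id so each comment's coding can be retrieved directly.
codes = {entry["id"]: entry for entry in json.loads(raw)}

print(codes["ytc_Ugz5pCVmEXXuxeS96mh4AaABAg"]["emotion"])  # fear
```

The dimensions in the coding-result table above (responsibility, reasoning, policy, emotion) correspond to the fields of the matching entry in this array.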