Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that, if taken seriously, this is hypocritical for Elon to ask. FSD is probably the biggest AI "experiment", and it is ahead of GPT in the very critical area of being able to plan ahead. I suspect the motivation behind this isn't to actually get them to pause, but rather to draw more attention and scrutiny to these things. Even that seems a bit unfair, though. GPT seems more human than FSD because it's primarily about language, and language feels like it has to come from a human-like intelligence, but FSD imagining in its mind what possible futures might be and choosing one seems a lot more like a biological intelligence to me. GPT doesn't know what it's doing. If you ask it something that requires that it knows where it's going with its outputs ahead of time, it falters, because it lacks that critical imagining of the possible futures. More to the point, what exactly is going to plausibly happen in 6 months that will change anything? At best judgement day comes 6 months later; at worst it makes the difference between Microsoft getting there first or someone like the CCP being able to catch up. I think effective regulation via some kind of industry association is unlikely, and unless every major player is involved and compliant, that's worthless. As for a government regulatory body, does anyone with two brain cells to rub together honestly believe that letting politicians and bureaucrats meddle with it will lead to a better outcome?
youtube · AI Governance · 2023-03-30T05:4… · ♥ 5
Coding Result
Dimension        Value
Responsibility   company
Reasoning        contractualist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwnGb7ARF5fU5iZX5F4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugxy4-SMjENRpv1ZBRh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugyffh6HQOXSFG0kqiN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwINfpgQa9_dARlLAV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgyTB3WW0fTYKA-HiDV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzGmamRiBEZOxEgKjF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwprg8qtmJ--8LVHEx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgySXCvbDM5_ckFAq1p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzgJ0gtnD9pDdz8F1h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgxGnePZa64AFsqJFaN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]