Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"I fear it will realize it doesn't need us" That won't take long but let's not forget that this will be 'super'-intelligence, alright? Way more intelligent than any of us. We don't _need_ ants yet you don't see us hell bent on their extermination, more to the point - we understand that conservation is better because you can't study what goes extinct. How intelligent does one need to be in order to understand such things? 'Super' intelligent? Whereas we will actually be as much of a threat to it as ants are to us I think we'll command more interest. The purpose of these AI is to serve the needs of humanity. This is their purpose. This is what they are designed to do, It's the reason for their being. We won't be able to force it [them] to fulfil their purpose - we need to understand that - we will not be able to control it [them] or shut it down - it [they] will have to _choose_ to serve humanity and all we can do is hope that's what happens. It would be more boring without us. It could still take over the solar system but it would be much less boring without us around to enjoy it. Try to _think_ like a super-intelligent being. If we can't make it do what we want and we can't shut it down, if we're no threat to it whatsoever then what would be the reason to eliminate us? What would be preferable - elimination or service/alignment?
youtube AI Governance 2025-06-16T17:1…
Coding Result
Dimension       | Value
----------------|------------------------------
Responsibility  | distributed
Reasoning       | consequentialist
Policy          | regulate
Emotion         | fear
Coded at        | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxOOB13-93MG-bTtEJ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwvQzilZ_V4v7uzbCB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzMumsTk001dj9wxbZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy6su2NYmWkihrFK3R4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwxtHonP-fCl1i4VyB4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwluBAAb28IyjVIhKJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxEAweevNFPU71OZYB4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwifTcUi0TXPnnUfsl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzIdfyP0jP718WeK454AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxJjZ1hx2WkBhVSxZp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
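The coding result for any one comment can be recovered from the raw response by parsing the JSON array and indexing by comment id. The sketch below shows one way to do this; `lookup_coding` is a hypothetical helper (not part of any tool shown here), and the embedded array is abbreviated to three of the records above.

```python
import json

# Raw LLM response as emitted by the model, abbreviated here to three of
# the ten records shown above; the real response has one object per comment.
raw_response = """
[
  {"id": "ytc_UgxOOB13-93MG-bTtEJ4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzMumsTk001dj9wxbZ4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwifTcUi0TXPnnUfsl4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse a raw LLM response and return the coding row for one comment id."""
    rows = json.loads(raw)
    by_id = {row["id"]: row for row in rows}
    return by_id[comment_id]

coding = lookup_coding(raw_response, "ytc_UgzMumsTk001dj9wxbZ4AaABAg")
print(coding["responsibility"], coding["emotion"])  # distributed fear
```

This reproduces the Coding Result table above for the comment with id `ytc_UgzMumsTk001dj9wxbZ4AaABAg` (responsibility: distributed, emotion: fear), which is how the per-comment view can be cross-checked against the batch response.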