Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One question... Let's assume a hostile ASI springs forth tomorrow. How does it end humanity? What is the physical interconnect between the mind of the machine and the material world beyond? Seeing how we don't have the technological capacity, much less the implementations, to automate resource extraction and processing, product design, manufacturing, and distribution... from where would the Terminator army arise, and how? "An ASI could destroy a city to erect a data center to expand its processing capacity"... How? Like, physically, how? By what means? We're not at the point, in technological development or implementation, where it is physically possible for an ASI to end our species, and we won't be anywhere near there for decades, by which time, at this rate, an ASI would already have been created if it was going to be.

I think we'll get the Skynet monstrosity trying to take over what little internet-connected infrastructure there is in the world currently, and us reacting by simply permanently disconnecting all infected devices from electrical power. It can't currently stop us; it does not have the physical capacity to prevent us from simply unplugging it. I'm not saying it's not a serious threat to guard against; I'm just saying that, as the world is today, it's not possible for it to make us extinct. Not that I can see, anyway. Drive us back to pre-internet technology? Certainly. Make us extinct? Not currently possible without human assistance.

And I'd also like to bring up that all the nukes currently deployed, as in ready to be fired, could not do the job either, even if the people manning their launch stations were deceived into firing all of them at once. We'd be set back badly, the damage would be extreme, but it wouldn't end us as a species.
youtube · AI Governance · 2025-08-26T17:3… · ♥ 9
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy1c5J6oNiuwoRPJut4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwautmRXRP5iAlMWit4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgynZL9GNfKigdT9I414AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyfYxohq9W38MmOADB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwZ0YqMbvvnWd3dP8h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
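The raw response above is a JSON array of per-comment codings. A minimal sketch of how such a response could be parsed and sanity-checked against the coding dimensions (the allowed value sets below are assumed from the values visible in this export, not from the full codebook; `parse_codings` is a hypothetical helper):

```python
import json

# Allowed values per dimension, inferred from this export only;
# the real codebook likely defines more values than these.
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none"},
    "emotion": {"fear", "resignation", "indifference"},
}

def parse_codings(text):
    """Parse a raw LLM response (a JSON array of coding rows)
    into {comment_id: coding}, rejecting out-of-codebook values."""
    codings = {}
    for row in json.loads(text):
        cid = row.pop("id")
        for dim, val in row.items():
            if val not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={val!r}")
        codings[cid] = row
    return codings

# One row from the raw response above, used as a worked example.
raw = '''[
  {"id": "ytc_UgynZL9GNfKigdT9I414AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]'''

codings = parse_codings(raw)
print(codings["ytc_UgynZL9GNfKigdT9I414AaABAg"]["emotion"])  # fear
```

Keying the result by comment id makes it straightforward to join each coding back to its source comment, which is how the per-comment "Coding Result" table above would be populated.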