Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The dangers of these LLMs are already present, and much is being done behind the scenes to try to get a handle on the possibilities. Yes, biological walls and alarms are built into the current chips used for AI, but 2023 chips and older don't have these safeties built in. There is a critical danger here. If you had a handful of bad actors, funded by an entity capable of providing enough, a small group of people with a private offline AI could end civilization. The equipment needed is out there for purchase. The DNA can be outsourced in a way that alarms don't go off. Give this group some humans to use for testing and I assure you they could create catastrophic viruses that would collapse civilization. Yes, this has been realized now; this case alone should be a warning. This could already have happened, and it would have been fairly easy. Things are being done now to monitor the proper channels for evidence of the possibility, but still. This shit is so dangerous in the wrong hands. And that is the question here: who is going to control this, and should they have control of it?
Source: YouTube · AI Moral Status · 2026-02-08T04:1…
Coding Result
Dimension      | Value
Responsibility | company
Reasoning      | consequentialist
Policy         | liability
Emotion        | fear
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzJrN25Teyc-btld014AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzhD0gSRExJAwS0zah4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy04Sg04fwpiuLQ4894AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxaYhBPs4zE_99anzN4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwwuENLS4A5s89JlVR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx9F7Le4mP8wIJOi5N4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyTXgAVrJyhNmWFsIt4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwFYtpcIqNRv9ueuzd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugyo4GC6ns59ypmOq-N4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxoezlHXCPGTlCG_dh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
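A raw response like the one above can be parsed into per-comment codes before use. The sketch below is a minimal illustration, not the tool's actual pipeline: it assumes the response is a JSON array of records with an "id" plus the four dimensions shown here, and the allowed category sets are inferred only from the values visible in this response (the real codebook may define more).

```python
import json

# Allowed values per dimension, inferred from the codes visible in this
# raw response; the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "resignation", "indifference"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codes) into a
    mapping from comment id to its coded dimensions, dropping any record
    that uses a value outside the known category sets."""
    coded = {}
    for record in json.loads(raw):
        dims = {k: v for k, v in record.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in dims.items()):
            coded[record["id"]] = dims
    return coded

# Example with the first record from the response above.
raw = ('[{"id":"ytc_UgzJrN25Teyc-btld014AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
codes = parse_raw_response(raw)
print(codes["ytc_UgzJrN25Teyc-btld014AaABAg"]["policy"])  # regulate
```

Validating against the category sets at parse time catches the common failure mode where the model invents an off-codebook label instead of silently storing it.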