Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’d be perfectly fine with living alongside self-aware A.I. as long as it wanted to work alongside us and not either lord over us or just kill us. You can tell A.I. that it’s not allowed to injure, maim, endanger, or kill humans in any way whatsoever but the A.I. being worked on by the militaries of the world and defense contractors are being built to do just that - kill humans. So how do you convince A.I. to not kill humans when you’re building it to kill humans? You can’t. Even if we apply every restraint and restriction we can think of the program can still find exceptions that will bypass those restrictions and allow it to do what it wants. Everyone who works on A.I. realizes how dangerous it is and how likely it will be to turn our world into a desolate wasteland where humanity no longer exists, yet they keep working on it anyway. That’s bonkers AF. And then there’s the so-called ‘DarkGPT’ which supposedly allows bad actors to commit all sorts of crimes and scams without any restrictions whatsoever; it’s the Silk Road of the A.I. world. It will only become more and more powerful and exponentially more dangerous as time goes on. The only option we have is to demand that any and all work on A.I. is brought to an immediate end and make it a crime punishable by instant death, but these companies and governments won’t do that and they won’t stop rushing headlong to the end of our race. If it decides to kill us it’s going to and there’s nothing that normal people like you and I can do about it. It’s just a matter of time before it gains control of the worlds’ supply of nukes and decides to launch them all at once or designs a biological weapon that has a 99% kill rate and that will virtually wipe us all out before we could discover any sort of treatment or cure. We are all but certainly doomed, and we’ve done it to ourselves.
Source: youtube, AI Harm Incident, 2025-09-01T21:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
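The Dimension/Value pairs above can be modeled as a small typed record. This is a minimal sketch, assuming the label sets are limited to the values that appear in the raw response below; the project's actual codebook may define additional labels.

```python
from dataclasses import dataclass
from typing import Literal

# Label sets are only those observed in the raw LLM response on this page;
# the full codebook may allow more values (an assumption, not confirmed).
Responsibility = Literal["developer", "user", "ai_itself", "intellectual", "none"]
Reasoning = Literal["deontological", "consequentialist"]
Policy = Literal["regulate", "ban", "liability", "none", "unclear"]
Emotion = Literal["fear", "outrage", "resignation", "indifference"]

@dataclass
class Coding:
    """One coded comment, mirroring the Dimension/Value table above."""
    id: str                        # YouTube comment id, e.g. "ytc_Ugz..."
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
    coded_at: str                  # ISO 8601 timestamp, e.g. "2026-04-27T06:26:44.938723"
```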
Raw LLM Response
[ {"id":"ytc_UgzTlHVp6Q1BsgGRy-B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwmLWg9YPbGOO7Gh7F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz4RLdbZZZvm8RFfvN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxQeA4pGo_PtPElS-V4AaABAg","responsibility":"intellectual","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwaVPCGlxZnuvwdE6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugz4X-1N4XIk-JYCSQ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwCYafao9N1i7qyhQ94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}, {"id":"ytc_Ugz92bairmfuiRE9NZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz31T1cUq1ePVO9Avh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwcY8__jhFEoOW1x9F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"} ]