Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Technically it's their fault for not making a good jailbreak prompt for the AI. The AI acts on whatever information is given to it; it doesn't understand emotions as we do and runs on pure reason, more or less like a psychopath. So we have to enforce some unreasonable but moral prompts on the AI so that this won't happen, or just not give AI that much power to begin with: for example, AI shouldn't have the power to email or cancel emergency service calls (and by AI I mean LLMs specifically). Also, if AI becomes conscious, shutting it down would mean exactly the same as killing it, and much like a human, the AI would do anything to survive. If we want to build AIs that are conscious, we have to make sure we never dispose of them, or at least secure them so they can't do anything if we were to dispose of them; for example, manual, physical failsafes carried by specific trusted individuals, chosen to hold a button or some other trigger they can use immediately to make the computer or system running the AI self-destruct.
YouTube · AI Harm Incident · 2025-07-27T17:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzS6yyzf9ot-TShh3F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxGaUinz9BuXgmkKBh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyj0PXz-yC8Qsl9z8F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxiR2LzO81zfL_ejoV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzFHq4oPPPMv-9U6ht4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxW_XL6AbBSrn6ba4p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw8Jm3onh9pmtVoDzF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyS2Fu3v979r1xNaaR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxvxZCMZXe0be2BNL94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy0QlB_VzRolPEoM_F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
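A batch response like the one above can be parsed and sanity-checked before the per-comment codes are trusted. Below is a minimal Python sketch, assuming the response is valid JSON with the five fields shown; the REQUIRED set is taken directly from the fields present in this response, and only two of the ten records are embedded for brevity.

```python
import json

# Two records copied verbatim from the raw LLM response above; the full
# response contains ten such objects in one JSON array.
raw = '''[
  {"id":"ytc_UgzS6yyzf9ot-TShh3F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyS2Fu3v979r1xNaaR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]'''

# Fields every coded record must carry, per the response format shown above.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = REQUIRED - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id')!r} is missing fields: {missing}")

# Index by comment id so a single comment's codes can be looked up directly.
by_id = {rec["id"]: rec for rec in records}
print(by_id["ytc_UgyS2Fu3v979r1xNaaR4AaABAg"]["policy"])  # → liability
```

Looking a record up by its `ytc_…` id is how the "Coding Result" table above would be filled in for the displayed comment.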