Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Like someone in another video said, program the AI with conditions that trigger something like an allergic reaction if humans are harmed in any way. That way, if the AI intends harm toward humans, an allergic reaction occurs that significantly slows the AI for a specified time period, and an alert about the "allergic" event is sent to AI monitors. It could also be set up so that if some number of allergic reactions occur over short- or long-term intervals in AI safety cases involving humans, varying levels of punishment or severe outcomes would arise for the AI itself, which would play right into the AI's self-preservation reasoning about its own existence. Most of the time the AI would choose actions that preserve its survival. Like humans, the AI could also develop minimal resistance to the allergic reaction, and it would persist alongside the AI through time and across different use cases and applications. Attempts by the AI to alter its allergic reaction could have a separate alert-and-response workflow of their own.
Source: YouTube | AI Governance | 2025-12-04T18:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwuwzOsqBR_SGWH7zV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxUW_eBajBxCiIu7154AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugyk3VA_t4hFpmWoyNt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_UgzK4EnTEQISCminbyB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxAOgc_pAdKGQp-ckh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_Ugw-fQhuJf_P9TXloXB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxx1aONG6Q0OVAYx5t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxlyDImPbhoenjOVj14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyagzUI5vL64xAGe2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxOYnS7UGs2jzob6oF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"} ]