Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgwdLY663…`: "Even if AI superintelligence were banned globally, some countries could still de…"
- `ytc_Ugxsd0Eou…`: "you always seem smart in a room full of dumb people. chatgpt isn't conscious dud…"
- `ytc_UgzWec2BA…`: "Who thought this guy would be the best face of the company...sure..he put on his…"
- `ytr_UgyYM3Lg8…`: "For all we know, pain might be essential to consciousness, and any developing ai…"
- `ytc_UgwzP3dWg…`: "I'm sure A.I \"artist\" can't count seven reasons on one hand why entire art commu…"
- `ytr_UgweOwPH9…`: "Do you? I don't. And despite that I released 3 music albums using AI for the s…"
- `ytr_Ugwn0f-6H…`: "I get where you're coming from! The interaction between humans and AI can defini…"
- `ytc_Ugxg8K9tD…`: "Sounds like all the same excuses QA engineers made when test automation took ove…"
Comment
Like someone in another video said, program the AI in a way to have conditions that could trigger somewhat of an allergic reaction if humans are harmed in anyway. That way, if the AI intends harm towards humans, an allergic reaction occurs that significantly slows the AI for a specified time period, an alert is sent to AI monitors of the “allergic” event, and could be setup in a way that if there are an x amount of allergic reactions over short or long-term time intervals with ai safety cases involving humans, then varying levels of punishment or severe outcomes could arise for the AI itself and would play right into the AI’s self-preservation reasoning related with its existence. Most of the time the AI would choose actions to preserve its survival. Like humans, the ai could also develop minimal resistance towards the allergic-reaction and it would persist alongside their existence through time and different use-cases and applications. AI’s attempts to alter its allergic reaction could also have a separate alert and response workflow created and utilized respectively.
youtube · AI Governance · 2025-12-04T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
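Each row of the Coding Result table is one dimension of a single coded comment. A minimal sketch of validating such a record, using only the dimension values observed in this batch (the real codebook may allow more values; `OBSERVED_VALUES` and `validate` are illustrative names, not part of the pipeline):

```python
# Value sets observed in this batch of codings; the actual codebook may be larger.
OBSERVED_VALUES = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def validate(coding: dict) -> list[str]:
    """Return the dimensions whose value falls outside the observed set."""
    return [dim for dim, allowed in OBSERVED_VALUES.items()
            if coding.get(dim) not in allowed]

# The coding result shown in the table above.
result = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "indifference"}
print(validate(result))  # []
```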
Raw LLM Response
```json
[
  {"id":"ytc_UgwuwzOsqBR_SGWH7zV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxUW_eBajBxCiIu7154AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyk3VA_t4hFpmWoyNt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgzK4EnTEQISCminbyB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxAOgc_pAdKGQp-ckh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_Ugw-fQhuJf_P9TXloXB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxx1aONG6Q0OVAYx5t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxlyDImPbhoenjOVj14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyagzUI5vL64xAGe2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxOYnS7UGs2jzob6oF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
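The raw response is a JSON array of per-comment codings, so looking up a coding by comment ID (as in the "Look up by comment ID" view) reduces to indexing the parsed array. A minimal sketch, assuming the field names shown above; the one-entry `raw_response` string and variable names are placeholders:

```python
import json

# One entry from a raw LLM response, abbreviated for illustration.
raw_response = '''[
  {"id": "ytc_Ugyk3VA_t4hFpmWoyNt4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]'''

# Index the array by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

record = codings["ytc_Ugyk3VA_t4hFpmWoyNt4AaABAg"]
print(record["policy"])  # regulate
```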