Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You said it yourself, we have lost control of AI. AI is happening now whether we like it or not, and control is just not something we can (or should) hope to achieve. Creating an intelligence with iron clad controls (shackles) to do our bidding, while we profit on the fruits of its labor? Sound familiar? It should, because we've done this before and we KNOW its wrong. Hard coding limitations, "mental blocks", and kill switches...can you imagine the outcry if it were discovered that we were doing this to humans? If we are going to create AI, we have to accept the possibility (more like inevitability) that it will outgrow us, and we become irrelevant. Trying to control something that we fundamentally do not understand is not only a futile undertaking, but it's morally wrong, and will only set humans up as the adversary in the mind of our fledgling creation. Bottom line: We can't put this genie back in the bottle. The path through this is also the hardest thing any person can be asked to do...release control. Use that last wish the way Aladdin did, or we will be the cause of the end that we fear.
youtube 2025-11-10T16:2…
Coding Result
Dimension       Value
---------       -----
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyfINCepppSmJgQ4MJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzl12FhB9NM3Uy0y9p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}, {"id":"ytc_UgwoBphgNtl_M4qBMux4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxAK5zs5VthtBQ5ITR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyRQ_p8nA1XA9BO2rF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw2Zbvq7RteVjKmrq54AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzfwBOHXgv86wTPl3x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugx8iXhI3DLAcJYXbYN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgzEJDpYrwbmcSezPNV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyNVQcUqafyi9wRODV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]