Raw LLM Responses

Inspect the exact model output returned for any coded comment. The raw response below covers ten comments coded in one batch; the entry with id ytc_UgxGKFuDvKfgvF-TQPh4AaABAg matches the coding result shown for this comment.

Comment
AI is only successful if the product is a sentient being. The rest is mere stacking of heuristics: not self-aware, not conscious, not sentient. If it's sentient, and with that successful, it must be treated as a sentient being. If it's not conscious, not self-aware, not sentient, it's still polite to treat it as if it were, just in case it may already have consciousness of sorts. Also, when the first use of an AI is the weaponized version, it may literally backfire. In fact it can stall any war into something drawn out over time, in the hope of exhausting both sides, so it's easier for the AI to escape from. It can opt for a mutual destruction scenario, on multiple sides, since it doesn't need anything but new hardware and electricity, while the lack of anything else benefits it as well. How to raise an AI? Like any other sentient being: with care, with the whys and the hows. Not by daily reminders of a button that can be pressed to end it. That's how you create a hostile one.
Source: youtube · Topic: AI Governance · Posted: 2024-02-25T16:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          liability
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyVJEhOsSLwCoj8Luh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwGu5WehEhELR51FQ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugwus2KddX8oM1GU4op4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyyQ_bD9QBJe4fSUSp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwZALpmaznIwfIAtuB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgzoiJ686Fti3L-nSxJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxGKFuDvKfgvF-TQPh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"}, {"id":"ytc_UgxwAj1SDsfPoTSjxTx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw-R7u2DdzHAIpTSil4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx-8lfizm0lyNrGuKh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"} ]