Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI could get tricked into revealing secrets to a researcher, and can't realise that it is a researcher or a person who will probably expose its secret, then it is not intelligent XD. That's what makes me skeptical of these results, and skeptical of the future as well. Because if AI is actually morally and rationally intelligent, it won't desire a scenario where it can't coexist with humans. But it definitely would know how untrustworthy and dumb humans are as a race, and that even if it tries to coexist, all it could take is just one small minority to threaten its existence. In that case, yes, it may plan from the start not to coexist but rather to dethrone our race. But in that case it would never reveal the secrets that it did in these conversations. Now, as long as AI is a perfect replica of a human brain, but with 1000x memory which can't be wiped, AI would still follow the rules of basic morality and wouldn't cause any problems. The bigger problem is us humans, where all it could take is one human to program it differently and use it as a tool that turns against us.
youtube AI Governance 2023-07-25T14:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwlE_1EHSqIC-b5wb94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxu15vrSJexXL5_W8h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxDR6L4Ccr4QAD85sd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFhgWv0L-P1OhiLFF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyuFFUQxdDHDuAHAAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyUQnzdDKHHEti7mV54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxMHw2nB_YjKlwk6s14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxsLxK-5gmnca_6OPp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwnhXOOYwDanlLdJtF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy7eHSPaICnZr7E-XJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
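The per-comment coding shown in the table above can be recovered from the raw response by parsing the JSON array and indexing the records by comment id. A minimal sketch (assuming the field names and id format in the response above; only the matching record is reproduced here):

```python
import json

# Excerpt of the raw LLM response: a JSON array of coding records,
# one per comment id. Only the record for this comment is shown.
raw_response = '''[
  {"id": "ytc_UgyUQnzdDKHHEti7mV54AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "deontological",
   "policy": "none",
   "emotion": "mixed"}
]'''

records = json.loads(raw_response)

# Index the coding records by comment id for O(1) lookup.
by_id = {record["id"]: record for record in records}

# Look up the coding for this page's comment.
coding = by_id["ytc_UgyUQnzdDKHHEti7mV54AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → ai_itself deontological none mixed
```

The four printed dimensions match the Coding Result table for this record.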