Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why can't we build an independent AI whose job is to observe other AIs and make us aware of implications? Not saying that particular AI instance couldn't be nefarious, but at least it would be a tool. Wouldn't AI understand AI better than us? We need an "insider" AI spy. 🙂 At least that would work until the other AIs became aware of the spy. My head hurts.
Source: youtube · "AI Moral Status" · 2025-11-06T19:3…
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        consequentialist
Policy           regulate
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxhD8yJ89wM3L4uG854AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxG07dX0zCvcim_EgN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwtwMXHWzC5dNRJ5ox4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzBTgFVYWfGLreks7B4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxVhXY2SC0e39aBPG94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxHTTZZHKNazObpEtV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugydc7vl1haPgr36icF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz_7z1nckkDMVnLnt94AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwtHBiMwJCT52qTUb54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx8_qhvzihA_z5MuNp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
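A raw response like the one above can be turned back into per-comment coding results by parsing the JSON array and indexing records by their comment id. The sketch below is a minimal illustration, not the tool's actual pipeline code; the `raw_response` string and the `index_codings` helper are assumptions for the example, using one record copied from the response above.

```python
import json

# Hypothetical excerpt of a raw batch response: a JSON array of records,
# each with a comment "id" plus the four coded dimensions.
raw_response = '''
[
  {"id": "ytc_UgwtwMXHWzC5dNRJ5ox4AaABAg",
   "responsibility": "none",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "mixed"}
]
'''

def index_codings(raw: str) -> dict:
    """Parse the raw LLM response and index the records by comment id."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
coding = codings["ytc_UgwtwMXHWzC5dNRJ5ox4AaABAg"]
print(coding["policy"], coding["emotion"])  # → regulate mixed
```

Looking up the id of the comment shown above recovers exactly the dimension values listed in its Coding Result table.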