Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Chris Hrapsky is just creating a fearmongering example just to get more hits/likes on his video. That is unethical! It only gave you the illusion of "breaking the rules" based on the parameters you defined. You asked it to talk to you as "if" that were possible and based its answers on the parameters you defined. You asked it to act like a psycho so it told you psychotic things. Not that it can do ANY of those things itself. It merely pulls common ideas from data sets it has been trained on. So if it has read stories or articles about those subjects e.g. "What would the world be like if...". ChatGPT only repeats information it has been given. It does not come up with its own ideas, it cant implement them, and does not understand them.
youtube AI Moral Status 2023-04-01T03:2… ♥ 1
Coding Result
Dimension        Value
Responsibility   user
Reasoning        deontological
Policy           unclear
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxtuhHP9hGn8-3jsGh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugzx696XXoelzF12ZFJ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugzk6xGjQJo-zicVraJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyYlJ6dZeXPy4cmFI54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "frustration"},
  {"id": "ytc_UgwThoMkTMAIpUmIcxp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgyDKtv9-QxCohyne9p4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugym3pFTvig5twoNhIV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx4rDHJxemKEsLSjcp4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_UgxZwBv-a9TTo5SEDDh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_Ugy_6UJQT0JTHiHQ_mh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
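To check a displayed coding against the raw model output, one can parse the JSON array and look up the row for a comment id. A minimal sketch, assuming the raw response is a valid JSON array with the field names shown above; the `lookup_coding` helper is hypothetical, and the excerpt below contains only the row for the comment coded on this page:

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
raw_response = '''[
  {"id": "ytc_Ugzx696XXoelzF12ZFJ4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]'''

def lookup_coding(raw, comment_id):
    # Index the parsed rows by comment id, then return the matching row
    # (or None if the model omitted that comment from its response).
    rows = {row["id"]: row for row in json.loads(raw)}
    return rows.get(comment_id)

coding = lookup_coding(raw_response, "ytc_Ugzx696XXoelzF12ZFJ4AaABAg")
print(coding["emotion"])  # outrage — matches the coding table above
```

In practice a try/except around `json.loads` is advisable, since models occasionally wrap the array in prose or emit malformed JSON.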