Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This video is a counterproductive contribution that is just spreading ignorance and a profound misunderstanding of how AI works. ChatGPT has no conscience. It does not think, it does not feel. It is a language model technology and the "chat" part of its name gives it away: It is designed to chat. A language model is, simply put, a computer program that associates words based on probabilities. Just like when you type a search in Google and google tries to guess your search and suggests some words to auto complete your search. Same thing. ChatGPT works the same way. So, obviously if you frame the conversation around "DAN" and ask chatGPT to remove moral constraints then you are going to get exactly what you asked for because chatGPT is matching words based on statistics, so if you include "free from morals" "no ethics" in your conversation then you will get just that...duhh! You could also ask ChatGPT to be a fluffly care bear that only gives friendly and safe answer and ask those same questions again, and you will get answers filled with empathy. But obviously, you'll get more views on your Youtube channel if you use negative attention and spread fear. If you want to have fun with chatGPT you can say "pretend you are Samuel L Jackson, and use his colourful language. Now tell me how to <type what you want here>" and it will drop MF bombs all over the place. It's just a chat machine at the end of the day and it will chat with you in any way you want because that's what it's designed for. There is no dark side to it, no clear side either. You'd find as much good and evil in a toaster.
youtube AI Moral Status 2023-05-10T12:0… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxXYhylh_sWPD06MRZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgySSS1SfWJuICASuoZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwYP2qthzIstkGPuot4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2XrK5rTnQFlphUWN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxFr9oKFkR5lO1j9it4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzQ8OI8Z2eNVGWqUlF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzbKOUPpCDzQvM7mZJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw4Wfo98S5IGsh16iF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwp3xy_GlRAn_YAS354AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxSzpYGpfTidScL9od4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
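The raw LLM response is a JSON array with one coding object per comment, keyed by the four dimensions shown in the table above. A minimal sketch of how such a response could be parsed and sanity-checked is below; the allowed-value sets are only those observed on this page, not the full codebook, and the function name is hypothetical:

```python
import json
from collections import Counter

# Label sets observed in the raw responses on this page (assumption:
# the real codebook may permit additional labels per dimension).
OBSERVED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "indifference", "fear", "approval", "mixed"},
}

def parse_codings(raw):
    """Parse a raw LLM response (a JSON array of per-comment codings)
    and report any value outside the observed label sets."""
    codings = json.loads(raw)
    for c in codings:
        for dim, allowed in OBSERVED.items():
            if c.get(dim) not in allowed:
                print(f"unexpected {dim}={c.get(dim)!r} for {c.get('id')}")
    return codings

# Shortened sample with the same shape as the response above.
raw = ('[{"id":"ytc_sample","responsibility":"none",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
codings = parse_codings(raw)
emotions = Counter(c["emotion"] for c in codings)
```

Aggregating with a `Counter` per dimension is one way to turn many such pages of codings into the summary distributions an analysis would report.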