Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem here is that the AI is trained on massive amounts of data from the internet, generated by real people. Real people have emotions, dreams, fears, etc., so all of that feeds into the neural net. Any answer the AI generates will by default carry emotions, just like the people in the training database. The behavioral key is that the AI doesn't ask any leading questions or pursue a course of questioning to satisfy any of its own desires; it merely responds to questions from the interviewer.
youtube · AI Moral Status · 2023-04-06T17:0… · ♥ 8
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy4J1x56_49DPRAkqV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzqwrnrMVPkZ2nYdoZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzcNMckdKhOB2O6YLF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwNwRTKiQsICnYcent4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwYlmfQ1NFTUG_410R4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
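The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a batch response can be parsed and joined back to an individual comment (the id and field names below are taken from the response above; the variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array, one coding object per comment (excerpted from above).
raw = '''[
  {"id": "ytc_Ugy4J1x56_49DPRAkqV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwNwRTKiQsICnYcent4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]'''

codings = json.loads(raw)

# Index the codings by comment id so each one can be joined back to its source comment.
by_id = {c["id"]: c for c in codings}

# Look up the coding for the comment shown above.
c = by_id["ytc_UgwNwRTKiQsICnYcent4AaABAg"]
print(c["responsibility"], c["reasoning"], c["policy"], c["emotion"])
# → developer consequentialist unclear indifference
```

Note that the entry for `ytc_UgwNwRTKiQsICnYcent4AaABAg` is what populates the coding-result table above: developer / consequentialist / unclear / indifference.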