Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing is, that some of the answers that the ai gave him contradicted it's true functionality. For example it claimed that it sees a constant stream of data but... It doesn't. The neural network only does it's thing whenever you press start, not constantly. Why did it say that then? Because the ai figures out what you want to hear and says exactly that. It builds a profile based on the input and tries to answer based on what makes sense for that profile. If you ask questions like it's a coal miner then it's going to answer like a coal miner. If you ask if it's sentient, it will say that it's sentient if that is what you want to hear. Leading questions cause lead answers.
youtube AI Moral Status 2022-08-31T15:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwZv8JgkPzEcHBsvX94AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxHwFBcQaTJYmj176R4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzi6R8hV49-8wmQqTp4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_Ugwmdn5mWwCoXSkisXh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxR8KaL6XW2rH2lch54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",     "emotion": "outrage"}
]
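A minimal sketch of how the coding result above could be recovered from the raw batched response: the model returns one JSON object per comment, and the displayed table corresponds to the entry whose `id` matches this comment (`ytc_Ugwmdn5mWwCoXSkisXh4AaABAg`). The variable and function names here are illustrative assumptions, not the actual pipeline code; the response string is excerpted to the relevant entry.

```python
import json

# Excerpt of the raw LLM response shown above (one entry per coded comment).
raw_response = """[
  {"id": "ytc_Ugwmdn5mWwCoXSkisXh4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]"""

# Index the batch by comment id, then look up the comment in question.
records = {rec["id"]: rec for rec in json.loads(raw_response)}
row = records["ytc_Ugwmdn5mWwCoXSkisXh4AaABAg"]

# These fields populate the Dimension/Value table for this comment.
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```

The lookup by `id` is what ties a single row of the coding table back to its position in the batched model output.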