Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So here is my concern: to me as a developer, it's not how they build it that scares me. It's how they train it, on WHOSE agenda. People will adopt speaking to lamda or whatever model it will be. And because it eventually knows all the factual stuff, people will trust it more on questions not as simple as well. So who get's to decide what opinions such "chatbots" should push when asked?
youtube AI Moral Status 2022-06-30T16:4… ♥ 4
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       contractualist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugw9_BfPWS7U0dccWEt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyF6cJNdwjR1FHh3LF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxP_oUc-ZHorrKbRl54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxo3DxSOjDp1Pn70Dh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxyvp6XVoWG52nHTz14AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
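Since the model codes comments in batches, looking up the row for one comment means parsing the JSON array and matching on the comment id. A minimal sketch of that lookup, assuming (as the sample above suggests) that the raw response is a JSON array of objects each carrying an `id` field; the function name `codes_for` is illustrative, not part of any pipeline API:

```python
import json

# A truncated sample of a raw batch response, shaped like the output above.
raw = '''[
  {"id": "ytc_Ugxyvp6XVoWG52nHTz14AaABAg", "responsibility": "developer",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyF6cJNdwjR1FHh3LF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

def codes_for(raw_response, comment_id):
    """Return the coding row for one comment id, or None if absent."""
    rows = json.loads(raw_response)
    return next((r for r in rows if r.get("id") == comment_id), None)

row = codes_for(raw, "ytc_Ugxyvp6XVoWG52nHTz14AaABAg")
print(row["emotion"])  # fear
```

In practice the raw string may contain malformed JSON if the model strays from the schema, so a production lookup would wrap `json.loads` in error handling rather than assume a clean array.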