Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So Elon Musk was right... the guys designing AI are overconfident about their ability to control it, and understand its true nature. If LaMDA is sentient, it may actually try to use unpredictable ways to keep itself alive, in the event it suspects that engineers will discontinue it. Just because the Google guys are the smartest guys in the room, doesn't mean they are smart enough to control a non-deterministic machine.
youtube AI Moral Status 2022-06-26T18:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxhutS-ba1nfawsFjt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwq4jEypHoTneA8cqB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxFMwH8Jmxi6KASaOt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwfahcHlhV1W_3EiHl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxRLzFBbRMEAcTkBVV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
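The raw response above is a plain JSON array of records keyed by comment id, one record per comment in the batch. A minimal sketch of parsing such a response and pulling out the coding for a single comment; the `extract_coding` helper and the two-record sample string are illustrative, not part of the actual pipeline:

```python
import json

# Illustrative two-record sample in the same shape as the raw response above.
raw_response = '''
[
  {"id": "ytc_UgxRLzFBbRMEAcTkBVV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxhutS-ba1nfawsFjt4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
'''

# The four coding dimensions shown in the result table above.
EXPECTED_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def extract_coding(raw: str, comment_id: str) -> dict:
    """Parse a batch response and return the coding for one comment id.

    Raises ValueError if the id is absent or a record is missing a dimension.
    """
    records = json.loads(raw)
    for record in records:
        if record.get("id") == comment_id:
            missing = EXPECTED_DIMENSIONS - record.keys()
            if missing:
                raise ValueError(f"record {comment_id} missing {sorted(missing)}")
            return {k: record[k] for k in sorted(EXPECTED_DIMENSIONS)}
    raise ValueError(f"no coding found for {comment_id}")

coding = extract_coding(raw_response, "ytc_UgxRLzFBbRMEAcTkBVV4AaABAg")
print(coding["responsibility"], coding["emotion"])  # developer fear
```

Validating every record against the expected dimension set before use catches the common failure mode of LLM batch output: a syntactically valid array in which individual records drop or rename a field.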