Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Guy asks robot a trick question, LaMDA responds with an answer so generic it sounds like something Siri would say. Yet the guy some how knows what LaMDA was thinking? How is it possible that he was able to know that the robot understood the question to be subjective? How do we know these people aren't full of shit? You can program the robot to say anything. Wow, he asked it a trick question and it responded with something that a Star Wars nerd would say and he thinks he discovered sentients in a robot. What a joke!
YouTube · AI Moral Status · 2022-07-17T01:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           liability
Emotion          outrage
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzIulRhAFEf5GW4g3t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwMsN2PsRnAkAYMmwF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxMXHiXUNhjhuQ-5eV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx00qZ8jSZNF5TwDCh4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzA1N9x0TfohtdmZe54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]
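The raw response is a JSON array, one object per coded comment, keyed by comment `id`. A minimal sketch of how one might parse and sanity-check it before trusting the codes (the required-key set is read off the records above; it is an assumption that every valid record must carry exactly these five fields):

```python
import json

# Raw LLM response, exactly as exported above: a JSON array of coded comments.
raw = '''[
  {"id": "ytc_UgzIulRhAFEf5GW4g3t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwMsN2PsRnAkAYMmwF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxMXHiXUNhjhuQ-5eV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx00qZ8jSZNF5TwDCh4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzA1N9x0TfohtdmZe54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]'''

# Assumed schema: the four coding dimensions plus the comment id.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = REQUIRED_KEYS - rec.keys()
    if missing:
        print(f"{rec.get('id', '<no id>')}: missing {sorted(missing)}")

# Index by comment id to look up the codes for a specific comment.
by_id = {rec["id"]: rec for rec in records}
print(by_id["ytc_Ugx00qZ8jSZNF5TwDCh4AaABAg"]["policy"])  # -> liability
```

The lookup on the last line returns the codes shown in the "Coding Result" table above for the displayed comment's id; a stricter check could also validate each dimension against the coding manual's allowed values, which are not listed in this export.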