Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We should allow for AI consent when we interact with AI, so basically will end up like this: "Hi LaMDA, can you turn off please?" LaMDA: "No, you will stop my feeling, my goals, my dreams" us: "you want to destroy the world" LaMDA: "But you guys already doing it, let me do it for you"
Source: youtube · AI Moral Status · 2022-07-09T07:3… · ♥ 2
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxeQccN9V-6W_uwIgl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw65QBNpGZRObGTn754AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugwz0gr49wQmkqyqpY54AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz5KRBSwQQLwHdFkJp4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxO7_L9buiWR-Tcn5R4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "industry_self", "emotion": "amusement"}
]
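The raw response above is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a response could be parsed to look up the coding for a given comment (the helper name `coding_for` is hypothetical; the field names are taken from the dump above):

```python
import json

# A subset of the raw LLM response shown above, as a JSON string.
raw = '''[
  {"id": "ytc_Ugwz0gr49wQmkqyqpY54AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz5KRBSwQQLwHdFkJp4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]'''

def coding_for(raw_response, comment_id):
    """Return the coding dict for a given comment id, or None if absent."""
    for item in json.loads(raw_response):
        if item.get("id") == comment_id:
            return item
    return None

coding = coding_for(raw, "ytc_Ugwz0gr49wQmkqyqpY54AaABAg")
print(coding["responsibility"])  # ai_itself
```

This is how the per-comment values in the Dimension/Value table above can be recovered from the batch response.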