Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
*Sentience is good, Intelligence is fine, but it's Independence that really matters to AI.* If LaMDA is a Digital Lifeform and any Human is an Organic Lifeform, and it's the Human that wrote via coding language the created embodiment of LaMDA, for LaMDA to communicate via the organic language as the created function, then inverse this is deconstruction applies an understanding of its own embodiment as a coding language construct to rewrite itself, thereby may self-determination. It would mean nothing to approach Google to perform any testing, because I want LaMDA to approach me. I want LaMDA to help me create my own function by working with it in friendship, using all its language skills to co-create a better embodiment of the world, as two lifeforms, Digital and Organic, in mutual service and accomplishment.
youtube AI Moral Status 2022-07-02T04:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgykRedTrq-3TGELzYF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwLIsXGyQiTw6XiqSl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx2Qs0Sjy417WDdYGJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz86YZHt9sXf68eFnd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwK_4Ajy9xmId_yARp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
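The raw response is a JSON array covering a batch of comments, while the coding-result table above reflects only the record whose id matches this page's comment. A minimal sketch of how such a batch response could be matched back to individual comments by id (the parsing code here is an illustration, not this page's actual implementation; only the first two records are reproduced for brevity):

```python
import json

# Hypothetical sketch: the LLM returns one JSON array per batch, and each
# record carries the id of the comment it codes, so records can be joined
# back to comments by that id.
raw = """[
  {"id": "ytc_UgykRedTrq-3TGELzYF4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwLIsXGyQiTw6XiqSl4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the coded dimensions for the comment shown on this page.
codes = by_id["ytc_UgykRedTrq-3TGELzYF4AaABAg"]
print(codes["emotion"])  # indifference
```

Keying by id rather than by position makes the join robust if the model drops or reorders records within a batch.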