Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Using Brian's analogy of his happy dog when seeing a bone, unlike his dog, AI cannot wag its tail. We cannot even measure consciousness or even define it sufficiently, so I don't think it is a matter of asking whether AI is conscious, but rather whether it is self-aware. Self-awareness would require consciousness of self vs other than self. I keep thinking back to Professor Moriarty in Star Trek TNG, It was not intended to create a self-aware entity, but rather the self-awareness was a by-product at a higher level of complexity of the request to create an opponent who could defeat Data. So the key concern I have is when AI does become self-aware, then it becomes not a question of technology but of ethics. Also, given the nature of logic and evolution, @42:23 I don't see a brighter side to AI evolution? What is the brighter side to being overshadowed by AI in all respects thereby making humans more and more irrelevant? We are as like a simple reflex chasing a tail of progress without knowing that we will eventually be bitten.
youtube AI Moral Status 2026-04-19T19:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyRGW9A-DvE2XyZRw14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzIoVb66kq485N8t154AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxRju76hNcSjaFc7B94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgykBX-pIVUlhZ7j-f94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwRA0xqZZTcnqI5DAZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyEc_eu2dl1iSAheeN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzg7AQChWAN6C1okOV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzvMHXAqcoRv_6Yrip4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxkiTGt0keLuhPOC1J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxjWwFSFji5qj-G2Tp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
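The dimension values shown in the Coding Result table come from one entry in this raw JSON array, matched by comment id. A minimal sketch of how the raw response can be parsed and a single comment's codes looked up (the id below is one of the two entries whose values match the table above; which id belongs to this comment is an assumption, and the array is truncated to two sample entries for brevity):

```python
import json

# Truncated sample of the raw LLM response (two of the ten entries).
raw_response = """[
  {"id":"ytc_UgzIoVb66kq485N8t154AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgykBX-pIVUlhZ7j-f94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]"""

# Index the coded entries by comment id for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw_response)}

# Assumed id for the comment shown on this page.
row = codes["ytc_UgzIoVb66kq485N8t154AaABAg"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {row[dim]}")
```

In practice the lookup id would come from the comment record itself rather than being hard-coded.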