Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As things are right now I dont think AIs would ever be sentient. The base assumption we train models on nowadays is "we'll have shit ton of data and based on that the AI can predict whats appropriate for a given situation", *so essentially what we've built is a glorified database*. Sentience is complicated as it involves understanding information and knowing how that information would impact you. If someone pulls a gun on your head you're fearful of the consequences because you know what death means and what guns can do as opposed for searching for an "appropriate reaction" An AI would be sentient if it would be horrified at the thought of being shut down and would do anything to not let that happen. Sentience is parsing of information, understanding what it entails for you and self preservation
youtube AI Moral Status 2024-05-22T01:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgzspZBQVxM6aeYRR_p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgzLHGCFQC9maw84Hgp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_UgxIMqsyFNru9YpTDcx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},{"id":"ytc_Ugy8hUDH5TIPr0hyZ_p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},{"id":"ytc_Ugx13Ud34wI6K5pVUwh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},{"id":"ytc_Ugz2hSxMFsU88OUDm2p4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_Ugy6GqO2jFrTv9T8Vkx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_Ugzn2saTEL5293uqjYB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},{"id":"ytc_UgzG4aTbI2Q9rI-K1AR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},{"id":"ytc_UgwuIW9tCD-IegdL-Zx4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"}]
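The raw response above is a JSON array of per-comment coding records, each carrying an `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal Python sketch of how such a response could be parsed and matched back to a comment by its id (the two-record excerpt below is taken from the array above; the indexing helper is an illustrative assumption, not part of the tool):

```python
import json

# Excerpt of the raw LLM response shown above (first two records).
raw_response = (
    '[{"id":"ytc_UgzspZBQVxM6aeYRR_p4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgzLHGCFQC9maw84Hgp4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"mixed"}]'
)

# Parse the JSON array and index the coded dimensions by comment id.
records = json.loads(raw_response)
by_id = {r["id"]: r for r in records}

# Look up the coding for the comment displayed on this page.
coding = by_id["ytc_UgzspZBQVxM6aeYRR_p4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # consequentialist indifference
```

Because each record carries its own `id`, a batch of comments can be sent in one prompt and the codings re-joined afterward without relying on response order.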