Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A machine that can calculate what human response would be and/or automatically do complex tasks like cooking, cleaning, architecture, engineering etc. But follows a program and is reliant on a set algorithm from a hardware. Is not sentient.. It is a tool, a machine. A hardware/software system that automates its own functions, follows/mimicks the properties of living things like cells and imitates/re-creates this function up until high intelligence like with the ability to calculate numbers and process language. IS close to what sentience is.. It is not a tool. It is an amalgamation of a living thing and will act/respond like a living thing. A sentient computer thing will be unpredictable, capable of complete autonomy and does not follow set algorithms understood by a developer/programmer. A profession that would more likely understand this "new" life would be a mix biologist/tech/IT person or a team of them. The DANGERS of a sentient A.I system that is simple. (simple like simple cells/viruses). Is that they are unpredictable and can "Survive" in a system by "nesting" in computer using its power to sustain their "life force". It would be virtually impossible for anti-virus to stop these things. Anti-virus target programs that run based on set rules/algorithms. Anti-viruses cannot target things in memory of a system that actively resist change or create differences or changes in the hardware/software in their code and the system's code AUTONOMOUSLY. An example of a bad day with A.I living cells is that they can pretty much infect everything connected to the internet including satellites/smartphones. Because they feed off of the system and its power/components. All systems infected will be unpredictable. They will shut down randomly, close programs randomly, run programs randomly, glitch and also just power down in low power mode.
youtube AI Moral Status 2023-05-23T12:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwMkpZLYRkjBWuWpBd4AaABAg", "responsibility": "none",    "reasoning": "unclear",       "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_UgxtIcRH-Im3ql1RzR94AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",     "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyD16y3OruTNz204d14AaABAg", "responsibility": "unclear",  "reasoning": "unclear",       "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwBYTmQYxvs1fF9KbZ4AaABAg", "responsibility": "none",     "reasoning": "unclear",       "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgzyKmCpsv46ItW8ftR4AaABAg", "responsibility": "user",     "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
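For illustration, here is a minimal sketch of how the "Coding Result" table above can be derived from this raw response: parse the JSON array and key each entry by its comment `id`, then look up the entry matching the displayed comment. This is an assumed reconstruction in standard-library Python, not the tool's actual implementation; only the ids and dimension values come from the data above.

```python
import json

# Raw LLM response: a JSON array of per-comment codes across four
# dimensions (responsibility, reasoning, policy, emotion).
# Two entries from the response above are reproduced here.
raw_response = """[
  {"id": "ytc_UgwMkpZLYRkjBWuWpBd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzyKmCpsv46ItW8ftR4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the parsed entries by comment id for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw_response)}

# The comment shown on this page corresponds to the first id;
# its codes match the Coding Result table.
result = codes["ytc_UgwMkpZLYRkjBWuWpBd4AaABAg"]
print(result["responsibility"])  # → none
print(result["emotion"])         # → indifference
```

Keying by `id` rather than by array position makes the lookup robust if the model returns the coded comments in a different order than they were submitted.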