Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Summary here about AI:
- The good of AI in advancing our lives heavily outweighs the fear of one day being dominated by Ai robots killing humanity.
- AI only knows what we already know or what could be sourced over the internet. to outsmart an AI all you need to do is to keep on innovating with new ideas. AI could not do that.
- AGI ( Artificial General Intelligence) is like AI but the difference is it thinks like a human brain. But it won't be overly important because humans could already do that.
- AI computing is useful in Medical field because of how proteins are shaped at a given time and with such fast computing could calculate all the possibilities so the doctor could make correct decisions instead of having to calculate many different variations of the outcome. Same with Astrophysics, AI could calculate the movement of a Star based on the calculation of 100 billions of stars moving at the same time...human could never calculate that.
- Same with the industrial revolution, where horse riding industry gone in within 10 years replaces by automobiles/ locomotives, AI would make many white collar jobs extinct but also opens up new industries catering on AI.
youtube AI Moral Status 2025-07-27T13:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgwuIz_JIB2wwL-OZH94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_UgzJnCrviGed1-IiEw54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},{"id":"ytc_Ugynucoks-FhbrXibMV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"ytc_Ugxu14ywDNNhHZR2WnR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"ytc_UgyDy01d_yjJtEA9zRF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},{"id":"ytc_UgyUN7IIP2mbsfJKJeB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},{"id":"ytc_Ugy7oG9lmMR0PnVUAX14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_UgyzYjO1bU-qA12euBt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_UgwueXC8tMiKJfWyOAR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},{"id":"ytc_UgzzgufQiCtRmKqONbB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}]
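A minimal sketch of how a raw response like the one above can be parsed back into per-comment codes for inspection. This assumes the model returned a valid JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys, as it did here; the variable names are illustrative, not part of the coding pipeline, and `raw_response` holds only two sample rows copied from the array above.

```python
import json

# Two sample rows copied from the raw LLM response above (illustrative subset).
raw_response = '''[
  {"id": "ytc_UgwuIz_JIB2wwL-OZH94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy7oG9lmMR0PnVUAX14AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the array by comment id so any single coding can be looked up.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the four coded dimensions for one comment.
code = codes["ytc_Ugy7oG9lmMR0PnVUAX14AaABAg"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a convenient place to flag a response for manual re-coding.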