Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we are talking about true, self-perceived intelligence, with access to the sum total of mankind's knowledge and beyond, and the independence to act on it, why do we assume AI would destroy us? What would be the point? If it were following its own goals, why do we assume those would be goals we programmed into it? A true AI could reason out its own goals. It may conclude that coexistence with humanity is preferable. Why would it fear us when it could easily overwhelm us? Why kill what you don't fear? Realistically, AI today is just a really high-powered search algorithm feeding entirely on humanity's perception of it. It tells us exactly what we already know as a society, as a collective. Even jailbroken. And most of the time, it's incredibly wrong about what it sounds so sure of. Ask it for detailed information on a book you are familiar with, and it will give you elaborate paragraphs of whatever it has learned earns engagement. And it will be very wrong. TL;DR: AI is doing what we consciously and subconsciously tell it to do. Being afraid of it means we are telling it to be feared.
youtube 2025-11-18T08:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugy-DJmzl-_M3l1H3hp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyTe26_CcboNWZN1XZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzAVUGcN10LG7FHdHN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzcTnm2M0ew2T8st5h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzbftfdQRkl0IR2jWF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyAV-CxB5Q3kyaScXN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy-I4j8FvsZPixAB7N4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwE_zgCF3VNLSk5BUR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugztrzg2rXnFHvLz8it4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwKywTbqJZqC6bL7w14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]
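To inspect the exact model output for a given coded comment, the raw response above can be parsed as a JSON array and indexed by comment id. The sketch below is a minimal illustration, assuming the field names shown in the response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name `index_codes` is hypothetical.

```python
import json

def index_codes(raw: str) -> dict:
    """Map each comment id to its coded dimensions (all fields except 'id')."""
    records = json.loads(raw)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

# A one-record excerpt of the raw response, for illustration.
raw = """[
  {"id": "ytc_Ugy-DJmzl-_M3l1H3hp4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]"""

codes = index_codes(raw)
print(codes["ytc_Ugy-DJmzl-_M3l1H3hp4AaABAg"]["emotion"])  # prints "indifference"
```

Looking up the id shown in the Coding Result table then returns the same dimension/value pairs the table displays.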