Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
An AI system, or real I system, always optimizes for a goal. Nature's real I has that goal as survival and reproduction. ChatGPT has it as simply emulating human text response. So what would the goal of a scary AGI be? It must have one. There is no meaning at all to a default goal or general goal. I have not seen this discussed in the context of videos like this. Why? It is only when the goal is significantly high level that its subgoals could be harmful to humans. That, and of course sufficient agency.
YouTube · AI Moral Status · 2025-04-27T05:3…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugz1NvKjZgqKF3WGISB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwFpxtDob8lVSba6bN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwooZhiVKutNw3dX554AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugym7I122A3NGDIYYC14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzIp5fmKyHnbji2u594AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyn8kZQE1Z9vR0zrbx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx_ixl_ZvqCSiNJQRZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyrHbyjyCijaqvDJmZ4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy_kEf8Aqetw4JpRK14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzOQdqcHws7CyWHZLx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
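A raw response like the one above can be parsed into a per-comment lookup table and checked against the coding scheme. The sketch below is illustrative, not the tool's actual implementation: the allowed values per dimension are inferred only from the values that appear in this response (the real codebook may define more), and the function name `parse_raw_response` is a hypothetical helper.

```python
import json

# Allowed values per coding dimension, inferred from this raw response.
# (Assumption: the actual codebook may allow additional values.)
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "approval", "fear", "outrage", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment id, validating every dimension's value."""
    coded = {}
    for item in json.loads(raw):
        cid = item["id"]
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {item.get(dim)!r} for {dim}")
        coded[cid] = {dim: item[dim] for dim in ALLOWED}
    return coded

# Example with one entry from the response above.
raw = ('[{"id":"ytc_Ugym7I122A3NGDIYYC14AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
codes = parse_raw_response(raw)
print(codes["ytc_Ugym7I122A3NGDIYYC14AaABAg"]["emotion"])  # indifference
```

Keying by id makes it easy to cross-check the "Coding Result" table above against the raw model output for any single comment.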