Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
when you try to create a brain that time itself a person should realize its too dangerous and he is saying he didn't realize that, this is hiding something. In order for human species to survive we destroy planet and our own species sometime to favor our interest and if AI realizes that in order for them to survive they have benefit certain few to their survival and then destroy that to survive.
YouTube · AI Governance · 2025-06-24T22:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzEzoB7zVMKt4B13Hp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxdOhiOncJkp_6Dy1V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy_G_vByiveMaa8wgx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwJscgrnF5U5iKe_4l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyaVT1LTkBgboI7jz94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzMqaS1_k1vZ-bo9CZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzB1YG486ZM9Rq-zdB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxPTRTp2-TIKItcJz54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx--zBQ_pS5VNhUrhh4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz9VsW8gQ5Wo5kVgXN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
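The raw response above is a JSON array of per-comment code objects, each keyed by a comment `id`. A minimal sketch of looking up the codes for one comment, assuming the response is valid JSON with the field names shown (`responsibility`, `reasoning`, `policy`, `emotion`); only the row matching the Coding Result above is reproduced here for brevity:

```python
import json

# One entry from the raw LLM response shown above (the row for the
# comment coded in this section).
raw = '''[
  {"id": "ytc_UgzMqaS1_k1vZ-bo9CZ4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "fear"}
]'''

# Index the array by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_UgzMqaS1_k1vZ-bo9CZ4AaABAg"]
print(row["responsibility"], row["emotion"])  # developer fear
```

Indexing by `id` lets the interface resolve any coded comment to its dimensions in one dictionary lookup, which is how a view like this one can pair a single comment with its row from the batched response.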