Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Quantum algorithms to model human consciousness. Yeah its done. So we are literally going to have actual beings, who are also capable of creating beings and any type and personality and intelligence of beings as they want, bro this isnt god(ly) then what is. I honestly dont know where this is gonna end. Next what ai creates universes and ofcourse simulations : just like a real universe with consciousness beings in it. Again, i fckng have no clue where this is gonna end. Also the fact that ais will do stuff so fast and improve so fast, theres no way to conprehend or catch up with it, ai will try to explain to us stuff and we would not be able to get it. ONLY way is to i guess use chips in our brains so we are able to understand it. Eventually these chips would be better than our brain and our brain would just be a side kick. Ps: Yeah i guess, at the end of the day, its going to come at, what all ai we are allowed to develope and what all not and what all are we allowed to do with the ai and what all not. Like probably giving consciousness to an ai would be banned. In future, its just going to be like not if we can do it, but if we are allowed to do it. Like how today we arent allowed to mess with biology like cloning, hybrids, etc even though we can do it.
youtube AI Governance 2024-02-28T22:1… ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxweGbizf3ZOhJQbsh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxhPSjXVfVINLASXsF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzm4eCIPigoS_8XhyJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzo9jzdBt1jv6Xz7XB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwyCDlkQCfnC-1Ae2h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzRgsZXdWq5FNeKHtF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwQN8ZxbCZifHm7b-94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzMVL45k_SNRfwlXNp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzHOTYz-EFo9KtqKpp4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxD9fcWydXM6WFVMqB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
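A minimal sketch of how such a raw LLM response might be parsed and shape-checked before use. The required keys are taken from the records above; the `parse_codings` helper and the set of keys are illustrative assumptions, not part of the original tool.

```python
import json
from collections import Counter

# Every coding record in the raw response above carries these four
# dimensions plus the comment id (assumed schema, inferred from the data).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and verify each record's shape."""
    records = json.loads(raw)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
    return records

# Excerpt of the raw response shown above (first and sixth records).
raw = ('[{"id":"ytc_UgxweGbizf3ZOhJQbsh4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"mixed"},'
       '{"id":"ytc_UgzRgsZXdWq5FNeKHtF4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]')

records = parse_codings(raw)
emotions = Counter(r["emotion"] for r in records)
print(emotions)  # Counter({'mixed': 1, 'outrage': 1})
```

Failing loudly on a missing key is a deliberate choice here: a malformed LLM response should be caught at parse time rather than silently producing blank cells in the coding table.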