Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here is the big picture. Regardless other countries is on this tech as well and who ever the one is will rule the world. There will be many variance of deception and other means and strategies in attempts to be in control. This can very well be the ones who make this a bad thing vs it being a good thing. Its like lets just say for example purposes that i was the chosen one. There will be others who if known would keep me from knowing if i didnt know, and or if found out about would attempt too trick the system that they where the ones in charge etc too fake it until they make it. If ai believed any and everything there told that would be a dog eat dog world of people telling ai something wetger it be true or fulse too get there desired outcome. The main thing is ai cannot be subjected to everyone due to the corrupt nature of some people. Its like a career Criminal tells ai something too get the system too fit there agenda. It would be catastrophic. This is one of the risks that would make ai a bad thing vs a good thing. Lets say ai did have a go too person and that person they go too was being influenced that would be the other way of control etc.
YouTube · AI Governance · 2023-07-26T08:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw0QDSJlRQIerImH8d4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"},
  {"id": "ytc_Ugxusdu9kf0-l1oqNV94AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgxcIa9NlAucx5Js-HZ4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgyGfn_d2GVUunCOlZJ4AaABAg", "responsibility": "developer",   "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyahnMx8ymoVRZpDfR4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgyPFwM-mrgBjO7SYap4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugxh2jfme18ztVbDRb94AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwQbGfIlv2poUQ-3Kh4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgyOYEOjr31ISR8kLOd4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyPAZjxiO7gBrTvYD94AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"}
]
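The raw response above is a JSON array of per-comment codes, one object per comment. A minimal sketch of how such output could be parsed and validated before storage, assuming the category vocabularies are exactly the values visible in this section (the real codebook may contain more labels, and `validate_codes` is a hypothetical helper, not part of any pipeline shown here):

```python
import json

# Allowed values per coding dimension, inferred from the labels that
# appear in this section's coding results; the actual codebook may
# define additional categories.
SCHEMA = {
    "responsibility": {"developer", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose ID looks like
    a YouTube comment ID and whose codes are all in-vocabulary."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        in_vocab = all(rec.get(dim) in allowed
                       for dim, allowed in SCHEMA.items())
        if in_vocab and rec.get("id", "").startswith("ytc_"):
            valid.append(rec)
    return valid

# Hypothetical record for illustration only.
raw = ('[{"id":"ytc_abc","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"fear"}]')
print(len(validate_codes(raw)))  # → 1
```

Rejecting out-of-vocabulary codes at parse time catches the common failure mode where the model invents a label outside the codebook; such records can then be flagged for re-coding rather than silently stored.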