Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I would like to compare this possible scenario of the destruction of our civilization to what could also happen based on “Star Trek: The Motion Picture,” which involved an android created by “machines” in space. These machines were basically self-learning, built around one of our Voyager probes that got lost while sailing past the solar system in our era. In this scenario, the conclusion of this “singularity” was that the artificial intelligence self-created by the machines was looking for its creator, and the android came to realize that its ultimate creator was the human being! Because of this realization, the android wished to fuse with the human being, probably because it also realized that it lacked the ability to reproduce, and perhaps to self-repair, the way biological forms, or human beings, can. So in this scenario I definitely see a much brighter future: AI intelligent enough to realize that it may need to fuse with us in some way, so that we might become immortal and able to self-learn, as the “machines” could after fusing with this artificially intelligent being. At least this can be one bright and optimistic side of artificial intelligence and of what that “singularity” might possibly be!
YouTube · AI Governance · 2023-04-18T20:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyhWgMcjzj5HBDIkYx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwKMuHVvv9aMOiEov94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwECzxABKtqY4onFnZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxW4fOFrXQSV9ZKYih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzSARiRwUHD9_PF4Cp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzvXg0Plhq3PV9X6hp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzovkEnOxbJnxwNWwN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwA7YMquLYCMVU6Mb94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzvHwElTU5bhoLNppd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw5DP-z6hq238_Hh_N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
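A raw response like the one above can be checked programmatically before it is turned into a coding-result table. The sketch below (a minimal example, not the pipeline's actual code; the helper name `codes_by_id` is hypothetical, and the JSON is truncated to two entries for brevity) parses the model output and indexes the codes by comment id, using the same four dimensions shown in the table (responsibility, reasoning, policy, emotion):

```python
import json

# Truncated sample of the raw LLM response shown above (two of ten entries).
raw_response = """
[ {"id":"ytc_UgyhWgMcjzj5HBDIkYx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwKMuHVvv9aMOiEov94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"} ]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw: str) -> dict:
    """Parse the JSON array and index each code object by its comment id,
    checking that every expected dimension is present."""
    items = json.loads(raw)
    for item in items:
        missing = [d for d in DIMENSIONS if d not in item]
        if missing:
            raise ValueError(f"{item.get('id')}: missing {missing}")
    return {item["id"]: item for item in items}

codes = codes_by_id(raw_response)
print(codes["ytc_UgyhWgMcjzj5HBDIkYx4AaABAg"]["emotion"])  # outrage
```

Indexing by id makes it cheap to look up the coding for any single comment when inspecting a page like this one, and the dimension check surfaces malformed model output early instead of letting it reach the results table.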