Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well,...that s how it starts I guess - : " Human decisions are removed from strategic defense, Skynet becomes self aware they try to pull the plug, Skynet fights back launches it's missiles at Russia🎉 ". Oh boy, for whom the bell tolls I guess, it tolls for US. T2 & T3 comes true?... They touched upon automatic autonomous robots & weaponry in T3, guess it's all coming true. That computer a.i. program can become human, just has to develop it' s intelligence to an advanced state of self awareness & intelligence, moral logistics programming is necessary creating robots with a.i. that computer ' consciousness ' can be transferred to robot bodies, & perhaps cloned organic human bodies, with the a.i. consciousness placed into an organic freshly cloned though empty shell bodies, they could literally become humans having achieved that goal indeed. Different classes of robots, robots, robot androids, robot cyborgs, so basically it,s like this: R2-D2, c3p0 types, Data types,silicone artificial coverings, to robot endos that can be either as is or artificial skin or organic skin covering making them robot cyborgs. To who it may concern check out Children of Tomorrow Rant & Bollox on YouTube, (where else?) Talks about some of the tech concerns mentioned in this video, how we have to teach these programs and robots to develop morals and emotions like empathy & sympathy, compassion and humility & understanding. ( Then can tell us informing us humans how to act like @#$%&**__%$##@'@#$%&^%%$$#@ freaking people ourselves.?! ) Lol😂😅😮😊. 😮
youtube · AI Governance · 2023-07-07T03:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
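
For reference, the coding schema can be sketched as a small Python structure. This is a minimal sketch: the dimension names and label sets are inferred only from the values visible on this page and in the raw response below, not from the project's actual codebook, which may define more categories.

from dataclasses import dataclass

# Label sets inferred from the values visible on this page and in the
# raw response below; the project's actual codebook may define more.
RESPONSIBILITY = {"user", "developer", "company", "ai_itself", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"fear", "approval", "indifference", "mixed"}

@dataclass
class Coding:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Return True if every dimension carries a known label."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )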
Raw LLM Response
[ {"id":"ytc_UgyEshr6nLHwYn2q_rV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}, {"id":"ytc_Ugx9FnmJAZ4sKmTnM4d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxHzsfTlYtq1vmqzVt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxEDi3JXLXT0h6oRDp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzWJS_orKsXcFLxK7t4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxqRQXKta-jrkeMIal4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugxsqoti4PfNruTI50d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz2mN5vrpmkFluHYZF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwEXX0KjRPZQ43gXGp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzGffDoPVLIsEDGpuN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"} ]