Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've been a technologist for over 50 years and agree with him that NeuroNets were the way to go with this. In the 80s I worked on a simulator of a non-Von Neumann architecture, that was not a neuronet, which would have been ideal for hosting AI. I did not pursue it. Since the 90's I was always concerned with the moral implications of AI driven military machinery that could identify and kill specific individuals. Such a device would be easy to make now in 2025 in the form of a drone. I'm not afraid of AI as it exists today, but rather of how humans will use the technology (hint poorly.) AI on it's own does not have true motivations and more importantly common sense. I'm always surprised when folks treat current AI as though it is an entity. It is not.
youtube AI Governance 2025-07-10T18:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw5EWFvhkSeSj526hB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwUqV2FBdjn3s5sAjV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyaerjl2ScACRdrcxt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyWfJmnh8a1ukRdN4J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwCi_itcHGrqSHmR9B4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx97YQBzVZrZFjymMJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxM7A_NAtDUnUwUpqV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxILpeyjj0KQiLivyV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzWuECOSpZl2RCJJXh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgydE9NMfW-pTwbnFW14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
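The coding result shown above is one record in this raw JSON array, keyed by comment id. A minimal sketch of how the raw response could be parsed and a single comment's codes looked up (assuming the raw response is available as a JSON string; only two records from the array above are reproduced here for brevity):

```python
import json

# Two records copied from the raw LLM response above
raw = '''[
  {"id":"ytc_UgwUqV2FBdjn3s5sAjV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx97YQBzVZrZFjymMJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

records = json.loads(raw)

# Index records by comment id so each coded comment can be looked up directly
by_id = {record["id"]: record for record in records}

# Retrieve the codes for the comment whose result is displayed above
codes = by_id["ytc_Ugx97YQBzVZrZFjymMJ4AaABAg"]
print(codes["responsibility"], codes["reasoning"], codes["policy"], codes["emotion"])
# → developer deontological regulate fear
```

The printed values match the Dimension/Value table for this comment, which is how the displayed coding result can be reconciled against the raw model output.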