Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is super interesting. On the other hand I think most of us who are at least a bit intelligent can foresee this. We all can come up with endless doomsday, Skynet scenarios what happens if it goes wrong. I don't need to be as smart as Mr. Hinton for that. And this was a philosophical question decades before any machine learning even appeared. The question is - will it happen? Is it technically possible? I always see Mr. Hinton as a bit too much tech optimistic in a way that AI will take over jobs and therefore shift the whole political system, etc. in the mid-term. I think the AI capabilities are largely exaggerated by the industry right now and we are actually quite far from that. My guess is that if you are in your late 30s or early 40s you don't have to worry about being hunted by a terminator. And by a terminator I really mean a killer machine that hunts you because the supreme mind decided that the humanity is not needed anymore. You can be very much hunted by an autonomous system used by some state actor during a war ... unfortunately.
youtube AI Governance 2025-09-22T11:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgzjFjY_HkP4w5sVvl54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwsqQCw2gIncQwtOkN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugz9NAOYf-FCo7e03vx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugz3zKWZcHsjb8PR3Kl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwSUHckszLjszdkEgJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyAMB2s5GrEMnAyLIR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzKsObXpYaRTnJCgmB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugw_RQgtXsKzxkAnfAt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwJEI6bvlqmxV0_ns94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzdPCoFbCCoQsRWeGN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
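Responses in this format (a JSON array of per-comment codings) can be looked up by comment id to recover the coding for any single comment. A minimal sketch in Python, assuming the four dimensions shown in the table above (the helper name and the two abbreviated sample records are illustrative, not part of the tool):

```python
import json

# Abbreviated sample of a raw LLM response in the format shown above.
# Ids and values are taken from the real response; only two records
# are kept here for brevity.
raw_response = """[
  {"id": "ytc_UgzjFjY_HkP4w5sVvl54AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz9NAOYf-FCo7e03vx4AaABAg",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw response and map comment id -> coding dict.

    Hypothetical helper: drops the "id" key so each value holds only
    the four coding dimensions (responsibility, reasoning, policy,
    emotion).
    """
    codings = json.loads(raw)
    return {c["id"]: {k: v for k, v in c.items() if k != "id"}
            for c in codings}

by_id = index_codings(raw_response)
print(by_id["ytc_Ugz9NAOYf-FCo7e03vx4AaABAg"]["policy"])  # regulate
```

Note that a response ending in a stray `)` instead of `]` (as sometimes emitted by models) will raise `json.JSONDecodeError`, which is a simple way to detect malformed output before coding results are stored.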