Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Hassandra It's honestly very hard and almost pointless to speculate about the way the next 50-100 years will unfold for humanity as a whole. My deep belief in terms of overall AI ethics is that humans are too egotistic to hand authority-level decision making to AI, and other kinds of risk-correlated areas. It's quite problematic even from a law pov - who do you hold accountable: original creators, consumer, AI itself? So at least for such kinds of tasks, AI will only be used as a recommender/tool, not an independent agent. And it surely can be useful, it is just that at this point in time we have a problem with fact checking. But anyways, jobs will surely transform, and the most primitive ones can be completely replaced. It doesn't mean that you are supposed to turn into a pet lol, just that you will have to learn new stuff in order to make money in this new world. And the thing is, today the only vision the major public has of AI is that it is language models, but this field is so much more, you would be blown away - recommendations, navigation, self-driving, discovery of new chemicals, drugs and math laws, simulations that weren't possible before. A lot is going on)
youtube AI Governance 2023-07-07T21:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytr_UgzZBZa5vsqXpN2YZ2t4AaABAg.9rsOXTlFMvX9sHGTRNB36K","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytr_UgyP32EFA3Y5ktq3NCR4AaABAg.9rq9WbI78bQ9rsKHLjQ-rd","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytr_UgyP32EFA3Y5ktq3NCR4AaABAg.9rq9WbI78bQ9rsnnSxiPBn","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytr_Ugx9bhtLneJ2aN4J9xl4AaABAg.9rpjteLMIMZ9t1-RcsIlgQ","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytr_Ugx5LT0M-B6vvyirP9Z4AaABAg.9rohS4EIjnX9s0zPcrwSe2","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytr_UgzhgbL9ssnNPSoPVXN4AaABAg.9rebYMGdY7H9viUlqHIxPn","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"resignation"},
 {"id":"ytr_UgwyYrot6kYsPsGLlRR4AaABAg.9reGNNANkzS9s-mgyxbLXl","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytr_UgwyYrot6kYsPsGLlRR4AaABAg.9reGNNANkzS9s2cWYjwYHB","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytr_UgxUDcaHQU7hVLpShSp4AaABAg.9rdSkgnSv1F9s3w0HmoPgU","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytr_UgxGaW9p18AEp5IotE94AaABAg.9rcov6TyeMk9sFJ0z6J2yF","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
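A raw response like the one above can be parsed and validated before its values are written back to the coding table. The sketch below is not part of the original tool; the two records in `raw` are copied verbatim from the response above, and the allowed label sets are assumptions inferred only from the values that appear on this page (the real codebook may define more labels).

```python
import json

# Excerpt of the raw LLM response shown above (two records, verbatim).
raw = '''[
 {"id":"ytr_UgyP32EFA3Y5ktq3NCR4AaABAg.9rq9WbI78bQ9rsnnSxiPBn","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytr_UgzhgbL9ssnNPSoPVXN4AaABAg.9rebYMGdY7H9viUlqHIxPn","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"resignation"}
]'''

# Assumed label sets, reconstructed from values observed in this export;
# the actual codebook is not shown on this page.
ALLOWED = {
    "responsibility": {"none", "user", "distributed", "ai_itself", "developer"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "approval", "fear", "resignation"},
}

records = json.loads(raw)

# Reject any record whose value falls outside the expected label set.
for r in records:
    for dim, allowed in ALLOWED.items():
        assert r[dim] in allowed, f"{r['id']}: unexpected {dim}={r[dim]!r}"

# Index by comment id to look up the coding displayed in the table above.
by_id = {r["id"]: r for r in records}
coded = by_id["ytr_UgyP32EFA3Y5ktq3NCR4AaABAg.9rq9WbI78bQ9rsnnSxiPBn"]
print(coded["responsibility"], coded["emotion"])  # distributed fear
```

Indexing by `id` is what lets the tool join one record out of a batched response back to the single comment being inspected, as the Coding Result table above does.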