Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is an excellent video again. I have been listening to Hinton for a while. He is the godfather of AI and says, like the guy from Anthropic, we don't know how the nodes and neural networks are actually formed. Depends on the data the AI encounters. When AI develops to general AI then all bets are off. Computer expert friends of mine say may never happen. If general AI happens then two things occur. AI is then working with concepts not just data and will be able to theorise which could be mind-blowing. Other outcome may be that AIs become sentient and have consciousness. Whether this then will lead to an appreciation of ethics and morality is unknown. Anthropology suggests not as these concepts and beliefs evolve over time as society develops. My guess is unless we do something as suggested here by Professor Dave we are screwed. But do we have the political will? Hinton said treat it in the same way as nuclear arms. We are risking mutually assured destruction if we do not. In other words we would be MAD not to agree to a non proliferation treaty for AI. Here's hoping!.
youtube AI Governance 2025-12-31T11:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[{"id":"ytc_UgygwhgKpGvTK_SWzhx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyOVgE6WbYZjNyMY5l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxGm9vCPtEUbYZWLHR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwtrUH9lup78L0Bigt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgyCj2mGB9WMM6Kj3A94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"})
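A likely reason every coded dimension above reads "unclear" is that the raw response is not valid JSON: the array opens with `[` but closes with `)`, so a strict parser rejects the whole payload. A minimal Python sketch of a lenient parser that repairs this specific defect before giving up (the `parse_llm_json` helper and the shortened `raw` string are illustrative assumptions, not part of any tool shown here):

```python
import json

# Abbreviated stand-in for the raw response above: note the trailing ')'
# where a well-formed JSON array would end with ']'.
raw = '[{"id": "ytc_a", "policy": "none"}, {"id": "ytc_b", "policy": "regulate"})'

def parse_llm_json(text: str) -> list:
    """Try strict JSON first; on failure, repair the one defect seen in the
    raw response above (array closed with ')' instead of ']') and retry."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        repaired = text.rstrip()
        if repaired.endswith(")"):
            repaired = repaired[:-1] + "]"
        return json.loads(repaired)  # still raises if the repair was not enough

records = parse_llm_json(raw)
print(len(records))  # 2
```

If the repair also fails, the coding pipeline would fall back to marking all dimensions "unclear", which matches the result table above.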