Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In the end it all comes down to the fundamental problem of misuse of technology. The premise is that super-intelligent AI will exist. What do we mean by that? A form of interactable intelligence that performs better than humans on most, if not all, predefined human benchmarks. But what is not stated here is what system-designed inputs and outputs it will have, and whether it will be completely automated in this setting. That is the core problem: even today's models can be pipelined into systems for performing tasks at any danger level, but nobody is doing it because of that risk factor and the potential damages. If the risks do not meet required levels of reliability, the most these systems can be used for is recommendation to a human agent, and even then the usage of such systems is questionable from every perspective. But it is possible. AI itself is not a threat, but the setting of its use might be, and this can and should be discussed today.
YouTube · AI Governance · 2023-07-07T16:2… · ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugxs32GfFAuVJqXtsER4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyP32EFA3Y5ktq3NCR4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxpm-nkEA4Jlj1DWUZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx3O-lecstqLqiaL5N4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx5LT0M-B6vvyirP9Z4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgywSt7QVnzDLLJwsnZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzR72iHwgV5RJqN_6F4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx1ltpClDN2cUZQHmJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwB-pGM8x1G4L7K-sB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugymf1lykKqLfaW0dVN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
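The raw response is a JSON array with one record per comment, keyed by comment id, with the four coding dimensions (responsibility, reasoning, policy, emotion) as string fields. A minimal parsing sketch, assuming this exact shape (the `index_codes` helper is illustrative, not part of the tool):

```python
import json

# One record from the raw LLM response above; the real response
# is an array of ten such objects.
raw = ('[{"id":"ytc_Ugxs32GfFAuVJqXtsER4AaABAg",'
      '"responsibility":"unclear","reasoning":"consequentialist",'
      '"policy":"unclear","emotion":"indifference"}]')

# The four coding dimensions used in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Parse the response and index each comment's codes by its id."""
    records = json.loads(raw_json)
    return {rec["id"]: {d: rec[d] for d in DIMENSIONS} for rec in records}

codes = index_codes(raw)
print(codes["ytc_Ugxs32GfFAuVJqXtsER4AaABAg"]["emotion"])  # -> indifference
```

Indexing by id makes it straightforward to join the model's codes back onto the original comments for inspection, as this page does for the comment shown above.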