Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Amazing. An AI researcher who doesn't have the slightest idea how close we are to the end. It's enough that Qualia exists and allowed to improve hers own code. Both are highly dubious, but if correct, it will take the AI seconds to transcend human consciousness and to become ASI, since the moment she's allowed to do it. Seconds to the end. It is exactly that bad.
YouTube · AI Responsibility · 2023-12-17T07:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugzt5-UvSyO5E0HyFUp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxMLkvlge943hVPVix4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzjkUg_Y3Chs3DEwid4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugxu9qPBKqXvuaYFxXB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxbYoUaQWDVM0InxMN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugzb7kdmG9qtdsun4up4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyNVccEFRafUNKo-Tl4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugz_tXyktcM3xPjc0jV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyw01zZjs1NmPJ85Wl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwUZECe-XRg0OP5enp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
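The raw response above can be parsed back into per-comment codes by indexing the JSON array on the comment id. A minimal sketch, assuming the schema shown above (the id and field values are copied from the response; the `codings` name is illustrative):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, schema as shown above.
# Only the entry for the displayed comment is reproduced here for brevity.
raw = '''[
  {"id": "ytc_Ugxu9qPBKqXvuaYFxXB4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment shown on this page.
coded = codings["ytc_Ugxu9qPBKqXvuaYFxXB4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # developer fear
```

This lookup is what produces the "Coding Result" table: each dimension in the table is one field of the matching JSON object.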