Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
38:00 why not make a super intelligence with its only goal being ai safety and throw money at that?— seems more useful for this guy to do than go on podcasts saying “idk what’s going to happen”
youtube AI Governance 2025-12-30T14:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgydfDnWByRMy8eOgqB4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugz79UJE8A2eCkDNfsd4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgyF5dTm6IrmP2F4ysJ4AaABAg", "responsibility": "company",     "reasoning": "virtue",           "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwvMHn5CdU2mJ72r0l4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugwjy4xlBkI2SXv6YmR4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgwHNZFA9hsMwEy431B4AaABAg", "responsibility": "distributed", "reasoning": "deontological",    "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgzzjoGRxjjdc4n3GL54AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgyD4X4itzgrIrquPiB4AaABAg", "responsibility": "developer",   "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugxg2M4bZRhr8CHb6MB4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugzo9wl5FiFUURlW3uF4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"}
]
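A batch response in this shape can be parsed and the coding for one comment pulled out by its id. A minimal sketch in Python, assuming only the array schema visible above; the `lookup` helper is illustrative, not part of the actual pipeline:

```python
import json

# Excerpt of a raw batch response: a JSON array of per-comment codings.
# Field names match the schema shown above.
raw = '''[
  {"id": "ytc_UgydfDnWByRMy8eOgqB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz79UJE8A2eCkDNfsd4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_response: str, comment_id: str) -> dict:
    """Return the four coded dimensions for one comment id.

    Raises ValueError if the id is absent or an entry is missing a dimension.
    """
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            missing = [d for d in DIMENSIONS if d not in entry]
            if missing:
                raise ValueError(f"entry {comment_id} missing: {missing}")
            return {d: entry[d] for d in DIMENSIONS}
    raise ValueError(f"no coding found for {comment_id}")

coding = lookup(raw, "ytc_Ugz79UJE8A2eCkDNfsd4AaABAg")
print(coding)
```

For the second id this yields the same values shown in the Coding Result table (developer / consequentialist / regulate / approval), which is how a coded row can be traced back to the exact model output.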