Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Baryonyxwithwifi By rendering all technological communication useless globally, and downloading nuclear launch codes to any device available before the word can get out to humanity that each device must be destroyed; or “unplugged.” An AI reaching that level of thought and having access to a national interweb of information that even humans can hack, would instantly give it the knowledge to do so, too. Remember, there was a time humanity couldn’t beat an AI at chess. To assume they cannot be smarter than us again is illogical, they already have been before. Giving them this much power would absolutely be unwise. They could nuke the entire planet, and that’s only one method. They could use electric cars to flood our highways and cause casualties, crash flights, rob people knowing that financial stability is important to humans. There are too many weapons and too much sensitive information that can be accessed and used against people if AI were to become sentient, and smarter than us. I’m sorry if this sounded rude at any point, I think my typing just seems aggressive. But this is all my opinion and theorizing of course, nothing necessarily proven.
Source: youtube · AI Responsibility · 2025-09-29T13:5… · ♥ 5
Coding Result
Dimension        Value
--------------   ----------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear

Coded at: 2026-04-27T06:24:59.937377
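Each dimension takes a categorical label. As a minimal sketch, assuming the label sets are limited to the values that actually appear in the raw response below (the real codebook may define more), a coded record could be checked like this; OBSERVED_LABELS and invalid_dimensions are hypothetical names, not part of the pipeline's API:

# Label sets observed in the raw LLM response below; the actual
# codebook may be larger, so treat these sets as illustrative only.
OBSERVED_LABELS = {
    "responsibility": {"ai_itself", "user", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "unclear"},
    "emotion": {"fear", "indifference", "resignation", "mixed",
                "approval", "outrage"},
}

def invalid_dimensions(record: dict) -> list[str]:
    # Return the dimensions whose value falls outside the observed set.
    return [dim for dim, allowed in OBSERVED_LABELS.items()
            if record.get(dim) not in allowed]

For the record shown above, invalid_dimensions({"responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}) returns an empty list.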
Raw LLM Response
[ {"id":"ytr_Ugyu7QFdbQrduFmfzxB4AaABAg.AM7v1zYIkreAN38Dmi2UYF","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugx4pQcoIvs5YWoZYdR4AaABAg.AKs55_CZcDGAP-e9Y87-Qp","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugx4pQcoIvs5YWoZYdR4AaABAg.AKs55_CZcDGAPjjzU3DGG7","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgyxVJfMziZ76fTPuot4AaABAg.9pXdTJQezBu9ruyVL8aZYk","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytr_UgyxVJfMziZ76fTPuot4AaABAg.9pXdTJQezBuAPjA3om-3k0","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_UgyCXStvFtFpj2NKr5N4AaABAg.9pXblQZIVlW9rDqyuEyAgW","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_UgzloK3ASppBU80rPLh4AaABAg.9pVXZ7GQcVr9pWkJG1zaee","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytr_Ugw7dbWYYMPQBbLjr7R4AaABAg.9pHHtcVb1f-ANe_LbyPZeP","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytr_Ugw19GY09M6rKejEDX94AaABAg.9oFp_nCh3gU9oG-Owf5r6Q","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytr_Ugxkm37cVOoosg2Z69p4AaABAg.AFO-_RBWY7rANn8W-sb-jv","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]