Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here is why AI is dangerous. It's not because it has a soul or self awareness. It's that a Human extinctionist could program an AI to follow that command prompt. Meaning that it's actions would be based on that line or computation
Source: YouTube · AI Governance · 2026-03-17T17:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwZURPiF5D7Pkg3Ujl4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgxZxPoUOysZgW_I-LJ4AaABAg", "responsibility": "user",        "reasoning": "contractualist",   "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgxSkFwv5B9vE6sJWDJ4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgwdS5vcKFU5gnfiOzF4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgxAsBzPPDFlTGWwlvN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgxxzYHHLB20NHzOijN4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgwIRUC2KdjICf6T0PF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzFLpJk4b0guTEGb4R4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgxfYPKDVM9nOKPvVUd4AaABAg", "responsibility": "distributed", "reasoning": "deontological",    "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_Ugza5KuqZrQKtaLq1Cd4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"}
]
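The coded result shown above is derived by matching one comment's id against the model's JSON array. A minimal sketch of how such a response could be parsed and mapped to per-dimension values (the `coding_for` helper and the two-entry sample are illustrative assumptions, not the tool's actual code):

```python
import json

# Sample raw model output: a JSON array of per-comment codings.
# (Two entries copied from the response above; the real array has ten.)
raw_response = """[
  {"id": "ytc_UgwZURPiF5D7Pkg3Ujl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxSkFwv5B9vE6sJWDJ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(comment_id: str, response_text: str) -> dict:
    """Return the coding dict for one comment id; KeyError if absent."""
    entries = json.loads(response_text)
    by_id = {entry["id"]: entry for entry in entries}
    return by_id[comment_id]

# Look up the coding for the comment displayed in the panel above.
coding = coding_for("ytc_UgxSkFwv5B9vE6sJWDJ4AaABAg", raw_response)
for dimension in DIMENSIONS:
    print(f"{dimension}: {coding[dimension]}")
```

For the developer-coded comment this prints the same dimension/value pairs as the Coding Result table (minus the `Coded at` timestamp, which the tool would add at write time).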