Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Naive thinking. Here was discussion about emerging behaviour of AI that is impossible to test for or foresee. We will have no chance to sit down the table and decide to kill this AI branch because emerging behaviour will outsmart us and if threatened will destroy us without us realising that it is already there.
YouTube · AI Governance · 2026-03-22T16:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxSp6Ls9VbI6OdwSHh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzpnnSl8HbwTc0o7Mt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzrfCmMWsyRHJo5mSZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwp77NMGC6LAMyQCIN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxKk_z5K8KBHdGF9OR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxNqYTotGxlJvtBF6R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyBl8PztBJOfXXHLZR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwT813VBJ7fFC9Rv3l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyEBcceq8XHCQkTYpN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxj-KnIt6rwczLt8l14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
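The raw response above is a JSON array of per-comment codings, one object per comment id with four fixed dimensions. A minimal sketch of how such a response could be parsed and validated is shown below; the allowed label vocabularies are inferred from the values visible on this page (the real pipeline may accept additional labels), and `parse_codings` is a hypothetical helper, not part of any named library.

```python
import json

# Allowed labels per dimension, inferred from the values seen in this
# page's output -- an assumption, not the pipeline's actual codebook.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# One row taken verbatim from the response above.
raw = ('[{"id":"ytc_UgwT813VBJ7fFC9Rv3l4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"ban","emotion":"fear"}]')
rows = parse_codings(raw)
print(rows[0]["policy"])  # → ban
```

Validating against a closed vocabulary at parse time catches the most common LLM coding failure, an out-of-schema label, before it reaches the aggregated results table.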