Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This guy is no smarter than a chicken. He sits a shits and thinks he produces eggs. Virtual eggs. Accept he doesn't realise that his eggs are not actually real. I think therefore I am... accept AI does not think. It calculates it infers it even fabricates. But it does not know what it does not know and it assumes that all information is electronically accessible. And it isn't People who are successful have both subjective and normative skills. The world is full of data. But being able to understand data is not done simply empirically. AI will never get smart because it isn't self aware and this is the trap. Because investment in AI is done for profit. The very basis is biased from the start. And this bias will always corrupt truth. It is a tool but there is an essential inverse relationship. The greater the complexity of the task you create AI to perform the greater the required understand of processes and subject matter is needed to verify and test results. Corruption is a reality. And we have already seen catastrophic failure in government policy that resulted in death. Robodebt... see Royal Commision report
Source: YouTube — AI Governance, 2025-08-26T01:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugy29htWaxqJDB78gQt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwGYLH5aYrwyIkskcF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgweOTT0F05j-9FAtnJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxNGyO4loNUPQeZflt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwhc49xB8f29Y0bMoV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyRBOdvVpDVfOrCjSh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxZQ5z0fSTwahdr1sl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy8i3mSArc_JP5FQn54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyyyuyTzLAkpe6heD14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxGT3_jPQeL0SR9SV14AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"}
]
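The raw response is a JSON array of coding records, one per comment, each carrying the four coded dimensions plus the comment id. A minimal sketch of how such a record could be parsed and validated — the `lookup` helper and `EXPECTED_DIMENSIONS` set are hypothetical names, not part of the pipeline shown above; only one record is inlined here for brevity:

```python
import json

# Hypothetical excerpt of a raw LLM response: an array of coding records.
raw = '''[
  {"id": "ytc_UgyyyuyTzLAkpe6heD14AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]'''

# Assumed schema: every record must carry these keys.
EXPECTED_DIMENSIONS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def lookup(records, comment_id):
    """Return the coding record for a comment id, checking its shape."""
    for rec in records:
        if rec.get("id") == comment_id:
            missing = EXPECTED_DIMENSIONS - rec.keys()
            if missing:
                raise ValueError(f"record missing dimensions: {missing}")
            return rec
    raise KeyError(comment_id)

records = json.loads(raw)
coding = lookup(records, "ytc_UgyyyuyTzLAkpe6heD14AaABAg")
print(coding["emotion"])  # → outrage
```

Matching the record against the comment's id is what connects the flat array in the raw response back to the per-comment coding table shown above.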