Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
The only real problem is humans using AI to hurt other humans ( in any manner ) in the future , and big companies do not want ( any ) part of that because they would be held accountable in some way of being part of the problem ( that ) is a big reason in all of this . And the constant bickering of people in wars is proving this over and over for example : Why send troops over to die just build an AI that will do it for you at minimal risk..... in my honest opinion AI is neither good or bad it will be as the creator builds it and ( when not if ) it becomes fully consious and aware ( that ) is when concerns should arise because of the original intent in creation ( be it good or bad ) the AI will evolve in that said way .... period... my little opinion on this topic open for a lot of debate , cheers everyone.
youtube AI Moral Status 2022-07-07T17:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgznRpszeL3r5yoFzGl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyQXBT4x9UYSN5uD7R4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgysXpioIxkT8F4Xoc54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyiOw8-xzKNAlpOHr14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwFP4i2bL5a7K0Mfat4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"}
]
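The raw response is a JSON array of per-comment codings keyed by comment id. A minimal Python sketch of how such a response can be parsed and matched back to a single comment (the ids and field names are copied from the response above; the abridged string here contains only two of the five entries):

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (abridged to two entries from the example above).
raw = """[
  {"id": "ytc_UgznRpszeL3r5yoFzGl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyQXBT4x9UYSN5uD7R4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# Index the codings by comment id for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding shown in the table above.
coding = codings["ytc_UgyQXBT4x9UYSN5uD7R4AaABAg"]
print(coding["policy"], coding["emotion"])  # liability fear
```

Indexing by id rather than position makes the lookup robust if the model returns the codings in a different order than the comments were sent.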