Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That's why capitalism exists. I work in customer service, and I do know what personalized service is. But if I create a world where my service is greatly needed, I don't have to fear the risk of losing you, because more often than not, I won't. Some companies thriving on 'integrity' will make their product better; some companies won't. So if, say, a chatbot strategically rejects a refund, you as a customer have to stay with the product regardless. If a company still faces a 'refund' challenge, then it is not doing anything to 'safeguard' its strategy.

Simplest scenario: Company A sells credit cards. It monopolizes deals, campaigns, and so forth. A Company A customer calls in requesting a late fee waiver. Company A's agentic AI doesn't care. The customer will think many times before dropping the card because of how big the company is. Company A will then decide to automate the late fee service from a few parameters, and will also automate refusal when waivers have been granted more than once or the allowable amount has been exceeded. Refunds are handled case by case: if investigation shows the refund is legitimate, or consumer-protection policy requires it, it is granted; if not, then who cares. This can be rolled out over time without any human customer service.

See, having non-human customer service is not only viable but sustainable. It's not only about personalization; on a business level, it's how I get everyone aligned at a fraction of the cost.
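The waiver policy this commenter describes can be sketched as a simple rule check. This is a minimal illustration, not anything from the coded data: the function name, the waiver cap, and the dollar threshold are all hypothetical stand-ins for the commenter's "few parameters".

```python
# Hypothetical sketch of the commenter's automated late-fee waiver rule:
# waive automatically unless the account has already been waived more
# than once, or the fee exceeds an allowable amount.
MAX_PRIOR_WAIVERS = 1     # "waivers have been done more than 1"
MAX_WAIVER_AMOUNT = 35.0  # "allowable amount" (hypothetical figure)

def auto_waive_late_fee(prior_waivers: int, fee_amount: float) -> bool:
    """Return True if the late fee is waived with no human in the loop."""
    if prior_waivers > MAX_PRIOR_WAIVERS:
        return False  # waived too many times already
    if fee_amount > MAX_WAIVER_AMOUNT:
        return False  # fee too large to waive automatically
    return True

print(auto_waive_late_fee(0, 25.0))  # first waiver, small fee -> True
print(auto_waive_late_fee(2, 25.0))  # already waived twice -> False
```

Anything outside these parameters would, in the commenter's telling, fall through to the case-by-case refund investigation rather than a human agent.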
youtube AI Jobs 2025-06-08T11:2…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        virtue
Policy           industry_self
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyJcx5ZcF2wkMJxWG14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyxwZ-1B52NW9tsGyN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxwVgamdXa3_gYzIcx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyI10_4IOmKfrLEs1p4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxDbAtiYnRqBw5COjZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyXBmvg2twbYDUOisZ4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgxDbs4MQpuUp77dH2p4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugx2BzMZPEhwZx-8QR14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxOx7LPwK6Z_0PISjV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyZaFeyz4hW2MRrpWZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
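A raw response like the one above can be parsed and sanity-checked before the codings are stored. This is a minimal sketch: the allowed value sets are inferred only from the values that appear in this response, and the full codebook may well contain additional categories.

```python
import json

# Allowed values per dimension, inferred from the response shown above.
# Assumption: the real codebook may define more categories than these.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "outrage", "indifference", "resignation", "mixed", "fear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A record is kept only if every dimension is present and its
        # value is one of the allowed categories.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id": "ytc_UgyXBmvg2twbYDUOisZ4AaABAg", "responsibility": "company",'
       ' "reasoning": "virtue", "policy": "industry_self", "emotion": "mixed"}]')
print(parse_codings(raw))  # one valid record survives
```

Records with unknown or missing dimension values are silently dropped here; a production pipeline would more likely log them for re-coding.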