Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There was a post on Reddit legal advice UK sub from a small business owner who had used an AI chat bot for customer support outside of business hours. It was supposed to give info on product availability & price only, but a customer managed to spend a significant amount of time “social engineering” it in a conversation until it gave him an 80% discount. At which point the customer ordered £8k worth of stuff. I mean, it’s an easy legal fix, the business simply cancels the order and refunds whatever the customer paid. But now the customer is threatening legal action that the business owner *may* have to defend if the customer bothers to go through with it. The business will win, there’s no question, as all they have to do is make the customer whole, ie, put him back in the financial position he was before ordering and paying, there’s no legal way to force a business to go through with a sale as long as they refund in full. The general advice has been “if an employee did that, you’d fire them. So get rid of the damned chat bot and manage your out of hours via email/webform etc unless and until you can get a properly guardrailed chat bot system and even then, don’t trust it.”
youtube AI Jobs 2026-02-06T13:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyOAGYJqQJZNXiOqoZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzXVKeBSdHAzqhF0LJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwK7_ixZnc95ZBySAF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzL2OsFjkgsgyWLlMV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxTso8uwltMTMJE8Ct4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw86jtv3GeyQ6cLah54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxs2wBy4SwHgETWNhF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgygJMC0qAmr_mwoajZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgygVW9ZjADTypr5Dv94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwrasYhq_7QB2Uq7zV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
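Responses like the one above can be parsed and sanity-checked before the coding results are stored. Below is a minimal sketch in Python; the allowed value sets are inferred from the records shown on this page and are an assumption, as the real codebook may contain additional categories.

```python
import json

# Allowed values per dimension, inferred from the responses shown here;
# the actual codebook may define more categories (assumption).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when every dimension holds an allowed value and
    the id carries the expected 'ytc_' comment-id prefix.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        codes_ok = all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
        if codes_ok and rec.get("id", "").startswith("ytc_"):
            valid.append(rec)
    return valid
```

Records that fail validation can then be flagged for a retry or for manual coding rather than silently entering the dataset.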