Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Here’s the problem: there is a direct financial cost when a student fails a seme…" (ytc_Ugy0pdh6F…)
- "Teslas self driving is waaaayyyy far away from anywhere close to being fsd it ha…" (ytc_UgxidkO6I…)
- "@hotpufff123Not exactly. You see for an Ai to create art it need to train on ot…" (ytr_Ugyhu89Bd…)
- "It's like the psychology of an incel, except instead of believing that you're in…" (ytc_Ugz8o2fX5…)
- "I wanted to become an artist but seeing all this ai mess happening makes me not …" (ytc_UgyCbDRHw…)
- ".....So do we think its lying, because as of recently we have material proof tha…" (ytr_Ugw64J_ls…)
- "Jobs that are becoming extinct will be any jobs driving. Robots will fill those…" (ytc_UgwR0uYfC…)
- "If Walter White and Jessie Pinkman had a baby, it would be that dude in the shir…" (ytc_UgyEkEVGB…)
Comment
There was a post on Reddit legal advice UK sub from a small business owner who had used an AI chat bot for customer support outside of business hours. It was supposed to give info on product availability & price only, but a customer managed to spend a significant amount of time “social engineering” it in a conversation until it gave him an 80% discount. At which point the customer ordered £8k worth of stuff.

I mean, it’s an easy legal fix, the business simply cancels the order and refunds whatever the customer paid. But now the customer is threatening legal action that the business owner *may* have to defend if the customer bothers to go through with it.

The business will win, there’s no question, as all they have to do is make the customer whole, ie, put him back in the financial position he was before ordering and paying, there’s no legal way to force a business to go through with a sale as long as they refund in full.
The general advice has been “if an employee did that, you’d fire them. So get rid of the damned chat bot and manage your out of hours via email/webform etc unless and until you can get a properly guardrailed chat bot system and even then, don’t trust it.”
youtube · AI Jobs · 2026-02-06T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
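Each coded record assigns one value per dimension. A minimal validation sketch, assuming category sets inferred from the values visible in this section (the full codebook is not shown here, so these lists are illustrative and may be incomplete):

```python
# Validate one coded record against an assumed codebook. The category
# sets below are inferred from values visible in this section and may
# be incomplete -- illustrative, not authoritative.
CODEBOOK = {
    "responsibility": {"company", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "mixed"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose values fall outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The record coded above: company / consequentialist / liability / mixed.
record = {"responsibility": "company", "reasoning": "consequentialist",
          "policy": "liability", "emotion": "mixed"}
print(invalid_fields(record))  # → []
```

A check like this is useful because LLM coders occasionally emit values outside the allowed categories; flagging them early keeps the downstream counts clean.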
Raw LLM Response
```json
[
  {"id":"ytc_UgyOAGYJqQJZNXiOqoZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzXVKeBSdHAzqhF0LJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwK7_ixZnc95ZBySAF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzL2OsFjkgsgyWLlMV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgxTso8uwltMTMJE8Ct4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw86jtv3GeyQ6cLah54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxs2wBy4SwHgETWNhF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgygJMC0qAmr_mwoajZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgygVW9ZjADTypr5Dv94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwrasYhq_7QB2Uq7zV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
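Since each raw response is a JSON array of records keyed by comment ID, the "look up by comment ID" step can be sketched as follows (the single-record payload and function name are illustrative; the field names match the coding dimensions shown above):

```python
import json

# A raw LLM response: a JSON array of coded comments. This one-record
# payload reuses the last entry shown above for illustration.
raw_response = """
[
  {"id": "ytc_UgwrasYhq_7QB2Uq7zV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw response and map each comment ID to its coded record."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgwrasYhq_7QB2Uq7zV4AaABAg"]["policy"])  # → liability
```

Building the index once per response keeps lookups O(1) even when many batches of coded comments are concatenated.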