Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Dear AI Chat bot. How does a Nobody in Sydney Australia, convince this insanely …" (ytc_UgwwKhkRf…)
- "The title should have been: A Chinese startup just showed the world how incompet…" (rdc_m9ff21t)
- "Is this AI. I don't think robots are that coordinated yet. I'm pretty sure they…" (ytc_UgySQM2GG…)
- "Lol. Wonder if this their way of having the memory cartel save face after the…" (rdc_od412hi)
- "> I wholeheartedly agree, what use is alignment if aligned to the interests o…" (rdc_m9jphet)
- "How would ai become superintellegent? I think it's likely to do what humans do. …" (ytc_Ugy-Qpe_c…)
- "If i see that robot i wil punch ✊ him and I will slap her head hehehe…" (ytc_UgxopcyWv…)
- "The whole point is to lower costs so products cost less to make. But in reality …" (ytc_UgzRxljvn…)
Comment
4:24 - I work for an e-learning firm, and my VP decided to kick out all writers and researchers for ChatGPT. What resulted was more inaccurate information that I, as an editor, have to flag down and an SME still has to check. It's irritating.
If this system was entirely handed over to AI, we would be publishing wildly inaccurate content.
We have a food delivery service here that has entirely removed humans from the equation. The AI only tells you that your food is coming in xx minutes. There's no way to get in touch with a human. If I didn't get a delivery guy assigned for half an hour and contacted actual humans, they'd cancel the order and refund my money. Now I just have a bot telling me the order's on its way. How do I contact the company if a delivery guy drops down unconscious outside my house? I can take him to a doctor, but I can't sit with him all the time. How will I contact his family if the only support I get is "Your order will arrive in two minutes" and "I understand but I can't help you with that"?
youtube · AI Responsibility · 2025-10-10T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzpXL_DHu-27znxXjR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyhSeqx6rT3qMGGXN94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyh1w_1_zyVl-Q1d2J4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxqx3TKt19eTR8H8_h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyxzsEn_0DcVuM8T4d4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy_SjaptTiiBPUuwQ94AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzAS2DgJnmGZYIYmgJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwajUsn1XG8EYvgVPl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwkbOlq5XYueMH3iZh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwKZmMP1qdC5cu6pCF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
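The raw response is a JSON array of per-comment codes along the four dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and validated before use — the allowed label sets below are inferred only from the values visible in this dump, so the real codebook may differ:

```python
import json

# Label sets per dimension, inferred from values visible in this dump
# (assumption: the actual codebook may include additional labels).
ALLOWED = {
    "responsibility": {"developer", "company", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "mixed", "indifference", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip entries that are not objects or lack a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension carries a known label.
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
print(parse_codes(raw))
```

Dropping malformed records (rather than raising) keeps a long batch run alive when the model occasionally emits an off-codebook label; the skipped IDs can then be re-queued for recoding.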