Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Eventually there’ll be no one left with money to purchase because all jobs will …
ytc_Ugxfc3Q_X…
Ngl i see more people using ai music and act like they made it then i see it wit…
ytc_Ugw16UZ6M…
I regularly get LLMs to gather data, then write and format things to save me tim…
rdc_nm1k4kv
I won't deny that I've used AI to generate art before. Nothing I'd ever post pub…
ytc_UgxHtTe04…
they be having fun making ai who can generate art using other people creations, …
ytc_UgyaPGNt4…
All you need is a logic bomb to foil them or 30,000 volts to reboot em and while…
rdc_gs5whnt
There’s over 20 nations and thousands of institutions working on AI. A six-month…
rdc_je4s1bh
Am sure u r only using Claude! What is coding ? Coding is a series of tasks need…
ytc_Ugyr4KHm4…
Comment
Commenting before watching, because this is an evergreen relevant comment about AI advice like this:
In engineering, we have a weird and counter-intuitive concept: "almost perfect" is worse than "pretty good". The reason for this is because if something is _almost perfect_ , you can get complacent, assume it'll just always work, and get taken by surprise when it breaks, but if it's _pretty good_ , then you _expect_ it to be wrong in some way, so you pay closer attention, you double check it, and you generally confirm that it's not broken.
AI advice is in the "almost perfect" category right now, and that makes it extremely dangerous, because that makes it so that people follow it _blindly_. Always remember: trust, but verify. Don't just assume the AI is correct. It's frequently not.
youtube
AI Harm Incident
2025-11-24T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugx46HsdO5vB3f3on0h4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwfH-pFbFfS4mB2aDh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwjngIgVcdaWcn8-aJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwORH6fT1daDN0207V4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxz6f_Kiag-g-7EInp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwooL8oW3IFRvo7QXl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwZe4AzSx1e5hOwZK94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgyKpQ0-yopz0ZFUWqR4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyVOVcdoXAtM06Ro3x4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwZb5tB5jvL0bdi0YB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "outrage"}
]
```
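As a sketch of how a raw response like the one above might be consumed downstream, assuming the array shape shown (the function name `parse_codings` and the validation step are illustrative, not part of the tool):

```python
import json

# Two records copied from the raw response above; each codes one comment
# along the four dimensions shown in the Coding Result table.
raw = '''[
  {"id": "ytc_Ugx46HsdO5vB3f3on0h4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwfH-pFbFfS4mB2aDh4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(text):
    """Parse a raw LLM response and index the codings by comment ID.

    Raises ValueError if any record lacks one of the four dimensions,
    so a malformed model output is caught before it reaches the UI.
    """
    coded = {}
    for rec in json.loads(text):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = parse_codings(raw)
print(coded["ytc_Ugx46HsdO5vB3f3on0h4AaABAg"]["emotion"])  # outrage
```

Indexing by ID is what makes the "Look up by comment ID" view above possible: one parse of the raw response, then constant-time lookup per comment.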