Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Easy to say do what you enjoy when you're a billionaire. Average people need tha…
ytc_UgzPbzg-b…
That “improvisational actor” framing is a palatable half-truth for people who wa…
ytc_UgyP_E9zB…
Except, AI in the end is only a very good database that is very good at comparis…
ytc_UgytapOvl…
This video has AI generated voiceovers, which is ironic. I wouldn't be surprised…
ytc_UgyATA0jD…
Q:How do we even find meaninging in life if the a.i. can do it better than you c…
ytc_UgxPahevx…
i defenitely appreciate your determination for art and stuff, but AI expansion w…
ytc_Ugy9IZggt…
It is just crazy that we are destroying ourselves for some meaningless AI bullsh…
ytc_UgyBXxBsJ…
Would I be okay to reference this short in a video I’m making about AI? ❤…
ytc_UgyHfeYuC…
Comment
The more powerful a tool is, the more dangerous it is. If a table saw injures thousands of people every year, the more so LLM. That doesn't make LLM bad, but you must know their risks and use them wisely. This balance of usefulness and danger will never go away, even if OpenAI implements some sawstops here and there. We, as humanity as a whole, just need to learn how to use this new sharp tool correctly.
youtube
AI Harm Incident
2025-11-09T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
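The four coding dimensions above take categorical values. A minimal validation sketch, using only the value sets that actually appear in this page's raw responses — the full codebook may define values not observed here:

```python
# Value sets observed in the raw LLM responses on this page;
# the actual codebook may allow additional values (assumption).
OBSERVED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "industry_self", "liability", "regulate"},
    "emotion": {"resignation", "fear", "indifference", "approval", "outrage", "mixed"},
}

def validate(coding: dict) -> list:
    """Return (dimension, value) pairs that fall outside the observed sets."""
    return [(dim, coding.get(dim))
            for dim, allowed in OBSERVED.items()
            if coding.get(dim) not in allowed]

# The coding result shown in the table above passes cleanly:
print(validate({"responsibility": "user", "reasoning": "consequentialist",
                "policy": "industry_self", "emotion": "approval"}))  # []
```

An empty list means every dimension holds a known value; anything else flags a coding the model may have hallucinated outside the schema.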
Raw LLM Response
[
{"id":"ytc_Ugz5e4gDWmYWMDGfK0d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyQSU-zKvJif4AB0et4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyXNjYbjTQ2y2NlSlJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwAoAe6CCIZnW6DWLZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx01Tk4XYJFgJw8ZUl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxLoA5eVUDH48UCRld4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwMyIG9MuPalcC-9l14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzGzr_xDqerMLXhfkN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwlK4owhbFt3gPu1D14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx1WNzhzMN-9oapu5V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
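The raw response is a JSON array with one coding object per comment ID, which makes lookup by ID straightforward. A minimal sketch, assuming the model output is valid JSON of the shape shown above (`index_by_id` is a hypothetical helper, not part of the actual pipeline):

```python
import json

# One entry from the raw LLM batch response shown above.
raw_response = """
[
  {"id": "ytc_Ugx01Tk4XYJFgJw8ZUl4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "approval"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the raw model output and index codings by comment ID."""
    codings = json.loads(raw)
    return {c["id"]: {d: c[d] for d in DIMENSIONS} for c in codings}

codings = index_by_id(raw_response)
print(codings["ytc_Ugx01Tk4XYJFgJw8ZUl4AaABAg"]["emotion"])  # approval
```

In practice the parse step would also need to handle a `json.JSONDecodeError`, since model output is not guaranteed to be well-formed JSON.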