Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The more powerful a tool is, the more dangerous it is. If a table saw injures thousands of people every year, how much more so an LLM. That doesn't make LLMs bad, but you must know their risks and use them wisely. This balance of usefulness and danger will never go away, even if OpenAI implements some SawStops here and there. We, as humanity as a whole, just need to learn how to use this new sharp tool correctly.
youtube AI Harm Incident 2025-11-09T22:4…
Coding Result
| Dimension      | Value                      |
| -------------- | -------------------------- |
| Responsibility | user                       |
| Reasoning      | consequentialist           |
| Policy         | industry_self              |
| Emotion        | approval                   |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id": "ytc_Ugz5e4gDWmYWMDGfK0d4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyQSU-zKvJif4AB0et4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyXNjYbjTQ2y2NlSlJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwAoAe6CCIZnW6DWLZ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx01Tk4XYJFgJw8ZUl4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxLoA5eVUDH48UCRld4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwMyIG9MuPalcC-9l14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzGzr_xDqerMLXhfkN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwlK4owhbFt3gPu1D14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugx1WNzhzMN-9oapu5V4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
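A raw batch response like the one above can be parsed and checked against the coding schema before use. The sketch below is a minimal, hypothetical validator: the allowed-value sets in `SCHEMA` are inferred only from the labels visible in this output, and the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension (ASSUMPTION: inferred from the labels
# that appear in this raw response; the actual codebook may be larger).
SCHEMA = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "industry_self", "liability", "regulate"},
    "emotion": {"approval", "resignation", "fear", "indifference", "outrage", "mixed"},
}

def parse_raw_response(raw: str) -> list:
    """Parse a raw LLM batch response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
    return records

# One record taken verbatim from the raw response shown above.
raw = (
    '[{"id":"ytc_Ugx01Tk4XYJFgJw8ZUl4AaABAg","responsibility":"user",'
    '"reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}]'
)
records = parse_raw_response(raw)
```

Validating before storage is what makes a "Coded at" row trustworthy: a record such as the one above only lands in the result table if every dimension carries a known code.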