Raw LLM Responses
Inspect the exact model output for any coded comment; look it up by its comment ID.
Comment
@juraj_b The issues with bad data exist whether we are using it with machine learning or not. It has been an issue ever since police first existed and will continue to be an issue forever.
Software won't completely solve the issue in every circumstance. Software may give a margin of error or provide a warning about low certainty of the data but it's up to people to use discretion and pay attention to the facts they trust.
If the system states there's a certainty of 10% a good cop will not act on that data alone, a bad cop will.
So at the end of the day AI/ML changes nothing when it comes to prejudice as data will never be complete. We need better cops.
ML however may be able to provide information that was never there before, such as an alibi, it may stop innocent people from being targeted if it is proven they were elsewhere.
As always, the tools are not at fault. People are.
youtube · AI Harm Incident · 2023-02-28T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_UgytuLjDqwIUiEJFPgl4AaABAg.9ci4w3tP8wvAD7Gnjp4Gnf","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_Ugzmn8mNwnYh4S1yFZh4AaABAg.9cfE0D0H5G49cfggLlipKo","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugx64bzFcTk5xMNU3VV4AaABAg.9cf3z6OOa0o9mfMo3NE6uP","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
  {"id":"ytr_Ugx64bzFcTk5xMNU3VV4AaABAg.9cf3z6OOa0oAD7GAi0JsoI","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzQdQZk4f8aNExtT0l4AaABAg.9cex8P35PYt9cpiZw6Ui8S","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgyFda32o6dSGsohy594AaABAg.AQwrlbATbWUAQxfuD5vu9g","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwyD5Vcwnc7FfEgJ7R4AaABAg.AQwO0rhlPWGAQwyZcBZ5jf","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzOw24A63ozrmgyiYJ4AaABAg.AQvyNn4r57CAQwKLUgTZSE","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgzB9vj7lisSMA6wjEN4AaABAg.AAsIOsgELruABJuhAPkYsw","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgweGBpZl_2SSmJsJsJ4AaABAg.A7gC1rhHwRtAEI9DROudjS","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
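A response like the one above can be parsed and indexed by comment ID before it is stored as a coding result. Below is a minimal sketch: the field names come from the JSON above, but the allowed-value sets are only inferred from the labels visible on this page (the real codebook may include more), and `parse_codings` is a hypothetical helper, not part of the tool itself.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (truncated here to two entries from the array above).
raw = '''
[
  {"id": "ytr_Ugx64bzFcTk5xMNU3VV4AaABAg.9cf3z6OOa0o9mfMo3NE6uP",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytr_UgzOw24A63ozrmgyiYJ4AaABAg.AQvyNn4r57CAQwKLUgTZSE",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "outrage"}
]
'''

# Allowed labels per dimension, inferred from the codings shown above.
ALLOWED = {
    "responsibility": {"user", "company", "developer", "government",
                       "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "ban",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference",
                "approval"},
}

def parse_codings(text: str) -> dict:
    """Parse an LLM response and index codings by comment ID,
    rejecting any entry with an out-of-vocabulary label."""
    by_id = {}
    for entry in json.loads(text):
        for dim, allowed in ALLOWED.items():
            if entry[dim] not in allowed:
                raise ValueError(f"{entry['id']}: bad {dim}={entry[dim]!r}")
        by_id[entry["id"]] = entry
    return by_id

codings = parse_codings(raw)
coded = codings["ytr_Ugx64bzFcTk5xMNU3VV4AaABAg.9cf3z6OOa0o9mfMo3NE6uP"]
print(coded["responsibility"], coded["emotion"])  # → user resignation
```

The final lookup reproduces the Coding Result table for the comment above (responsibility: user, emotion: resignation); an out-of-vocabulary label would raise before the batch is indexed, which is one simple guard against LLM outputs drifting off the codebook.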