Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
| Comment (truncated) | ID |
|---|---|
| Maybe k my words & those of others one day this AI will destroy the human kind .… | ytc_UgxgZxxUI… |
| Hey guys have you heard of Ecosia, the browser that plants trees, I think it als… | ytc_Ugz3i67dv… |
| Damn! After that grilling, I was waiting for ChatGPT to shout, "You can't handle… | ytc_UgwHIapwT… |
| Still, the problem is not AI. The problem is in the government system. That's ex… | ytc_UgwyO3Jrq… |
| @Irish91b ..You really believe they don’t break rules with algorithms and privac… | ytr_Ugw7bgkhT… |
| Absolutely! It's crucial for AI to balance efficiency with empathy. Sophia's per… | ytr_Ugz7zieZ5… |
| I can't wait for the time when our rulers are replaced by the A.I. Autocrat Eart… | ytc_Ugg4Od1C-… |
| I wish all Ai generators the same heart burn that they are giving mother nature.… | ytc_Ugwryg45q… |
Comment
AI is developed by humans, who train the algorithms using data produced by humans; the system's efficacy is inherently dependent on its creators and the quality of the training data, which is inherently susceptible to inaccuracies. Unfortunately, neither humans nor AI systems possess innate capabilities in critical thinking or analytical reasoning.
youtube
AI Harm Incident
2025-11-25T06:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyBT9poAAMTZCikqcZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzgUe6Zwi3KFYjq-4Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwp6itWhN9NK_yJWU14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxLTecsnkYpLpPn0rF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgylpSPoKXckh-4WczZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwzQV3gRkI-5pjk4Nh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzQwY7JjRucGYFY1bJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzRnD2Me5GfRxcS0nR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_Ugx5KvVz2ofbLUET79p4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx-sYy84YtMcKXJ_tN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
```
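The raw response is a JSON array of per-comment codes. A minimal sketch of parsing and validating it in Python follows; the field names come from the response itself, but the allowed-value sets are an assumption inferred from the values seen above, and the real codebook may include more categories:

```python
import json

# Allowed values per dimension: ASSUMED from the values observed in the
# raw response and the "Coding Result" table; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"indifference", "fear", "approval", "resignation", "outrage"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of per-comment codes) into a
    dict keyed by comment ID, rejecting any out-of-codebook value."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry["id"]
        for dim, allowed in ALLOWED.items():
            if entry[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={entry[dim]!r}")
        coded[cid] = {dim: entry[dim] for dim in ALLOWED}
    return coded

# One entry taken verbatim from the response above
raw = ('[{"id":"ytc_UgzRnD2Me5GfRxcS0nR4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"resignation"}]')
codes = parse_codes(raw)
print(codes["ytc_UgzRnD2Me5GfRxcS0nR4AaABAg"]["policy"])  # prints "liability"
```

Validating against a fixed value set before storing is worthwhile here because LLM coders occasionally emit labels outside the codebook, and a hard failure is easier to audit than a silently stored novel category.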