Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "You support Tech Bros when they say Niteshade might work, but you don't support …" (ytc_UgzOh-sLV…)
- "Artificial intelligence will be so smart and so efficient it will rewrite its ow…" (ytc_UgzBgMR6W…)
- "Will ai be smart enough when it does become conscious to not let us know?…" (ytc_Ugy_ss8pf…)
- "Pure fear mongering, just like ai is a big excuse for business to lay off people…" (ytc_Ugxn3U1qC…)
- "now they can copy and paste retina maps and finger prints into their AI models s…" (rdc_oi0ffil)
- "it’s been that way for years. The same year they invested billions in AI is the …" (rdc_oi3tn8w)
- "To your original question (on accountability when AI-assisted targeting is being…" (ytc_UgxZwUr5B…)
- "@irrelevantonyt Okay, I can see your point and where you're going with this, but…" (ytr_Ugz0MXRCW…)
Comment
I've experienced this first hand with ChatGPT for medical information, or anything else for that matter. It tends to hallucinate and give you contraindicating information. When you press or correct it, it sort of admits the mistake but glosses over the correction further down the conversation. Luckily I do my due diligence with everything ChatGPT presents as fact.
youtube
AI Harm Incident
2025-11-25T06:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyBT9poAAMTZCikqcZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzgUe6Zwi3KFYjq-4Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwp6itWhN9NK_yJWU14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxLTecsnkYpLpPn0rF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgylpSPoKXckh-4WczZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwzQV3gRkI-5pjk4Nh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzQwY7JjRucGYFY1bJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzRnD2Me5GfRxcS0nR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_Ugx5KvVz2ofbLUET79p4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx-sYy84YtMcKXJ_tN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
```
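A raw batch response like the one above can be turned into per-comment coding records with a small parsing step. The sketch below is illustrative, not the tool's actual implementation: the allowed category values are inferred only from the examples shown on this page (the real codebook may contain more), and the function name and example ID are hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the examples on this
# page (assumption: the actual codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"indifference", "fear", "approval", "resignation", "outrage"},
}

def parse_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response into {comment_id: codes}.

    Rows with a value outside the allowed set for any dimension are
    dropped, so malformed model output never reaches the results table.
    """
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[row["id"]] = codes
    return coded

# Hypothetical single-row example (the ID is made up for illustration):
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
print(parse_raw_response(raw))
```

Validating against a closed value set, rather than trusting the model's JSON as-is, is what makes a "Coded at" record like the table above safe to store: any comment the model mislabels simply stays uncoded instead of polluting the dataset.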