Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "That's why 😅 you sometimes need to F🤬🤬k AI 😂🤣. Sorry to say but we humans ma…" (ytc_UgyrC_eWZ…)
- "You can train an ai that recognizes and maybe un poison the data or remove it fr…" (ytc_UgwNCl6w0…)
- "based on what? A shittily trained AI can be much, much worse than the average …" (rdc_dfueh0k)
- "So if AI works, we will all be George Jetson working three hours a day, three da…" (ytc_Ugw02caPk…)
- "That's the worst way of using AI, we r humans not robots, the initial vision for…" (ytc_UgxFM3B4j…)
- "I actually enjoy programming like this. It's always like a game in using the rig…" (ytc_UgwqJeEKW…)
- "Automation wouldn’t be a bad thing if it weren’t for the people who own the mean…" (rdc_jcjwj0b)
- "Me breaking my sleep schedule and finally making something that makes me feel pr…" (ytc_Ugw0KrLoN…)
Comment
It highlights the very real problem with AI, especially as it is being used more and more in professional fields, and it is compounded if the professional user is not well trained in the core fundamental understanding of their supposed field of expertise, be it engineering, clinical, or any other field. AI seems to give, in most cases, an answer it thinks you want. If you don't challenge and word your questions concisely, you will get false information that appears to be knowledgeable. Basically: put shit in, get shit out. This is often seen when students don't understand their subject matter well enough, because they have not studied the subject and grasped an understanding to the level they should, and use AI to create their assignments. They are basically putting shit in, and yes, they get shit out and try to pass it off as work worthy of a high distinction.
Even with a well-worded question for AI to provide a sound answer to a problem, AI can and will try to give you utter rubbish, and if you don't know better, that gets passed off as fact.
youtube
AI Harm Incident
2026-03-20T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzmeQre6h9xa2MQZCt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwJURSNxAuiUNhoNE94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwmKyOi0JffmvQQkzp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxSU8HMa3LRiI3QHVZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz3hPS2zM1T1pYwP9t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyEGlL-BY7JaJXLV-J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwWd9s_zh1Y_d40CcF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx67vqJmjeYpyGHgMV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwGE_0NVlfL4opsTKZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxzZWKp4nQMa1wOtOV4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
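The "Look up by comment ID" view above can be reproduced directly from the raw model output. A minimal sketch, assuming the response is always a JSON array of records with the four coding dimensions shown in the result table (the `index_codes` helper is illustrative, not part of the tool; the sample records are abridged from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, as shown above
# (abridged to two records here for illustration).
raw_response = '''[
  {"id": "ytc_UgwJURSNxAuiUNhoNE94AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_dfueh0k", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"}
]'''

# The four coding dimensions from the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict[str, dict]:
    """Parse the model output and index records by comment ID,
    skipping any record missing one of the expected dimensions."""
    records = json.loads(raw)
    return {
        r["id"]: {d: r[d] for d in DIMENSIONS}
        for r in records
        if all(d in r for d in DIMENSIONS)
    }

codes = index_codes(raw_response)
print(codes["ytc_UgwJURSNxAuiUNhoNE94AaABAg"]["emotion"])  # fear
```

The guard against missing dimensions matters because model output is not guaranteed to be schema-complete; dropping malformed records keeps the lookup table trustworthy.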