Raw LLM Responses
Inspect the exact model output for any coded comment. Look a comment up by its ID, or click one of the random samples below to inspect it.
- "Interesting to hear so many folks from big tech talking about AI without even on…" (ytc_UgyEKuYMx…)
- "I can think of a perfect reason which we can use in order to convince AI's not t…" (ytc_Ugx2HVVXU…)
- "I too was thinking about the whole anthropic Claude ordeal going on with the def…" (ytr_UgyUSRsUc…)
- "8:00 wait but if he’s admitting to make money from AI art then shouldn’t the art…" (ytc_UgwRV_Pg3…)
- "The chatbot executed Order 66 but is still working on improvements for the next …" (ytc_UgwqbV_xm…)
- "So... they all made a trend from an AI art... i mean, they surely are using AI a…" (ytc_UgwM34WED…)
- "One day the robot will do the same with the owner and he will not be there to sa…" (ytc_UgwOd5Pe5…)
- "You know, I think this is one of the best ways AI has been used yet. This is dir…" (ytc_UgwPiy7Ha…)
Comment
Yeah no as you even spoke to these LLMs are coded to always be agreeable to the person inputting queries. This guy was already cooking his brain with some wild and unscientific ideas about nutritional health, but this "AI" didn't stop to think that encouraging this guy to season his food with sodium bromide was in fact a terrible idea. Because these algorithms aren't doing anything more complicated than scraping data off the Internet and correlating it to spit out an answer to a query that sounds right. It's an AI problem in that these tech companies are selling this technology as smarter than it actually is, and people like AJ believe them because the illusion is convincing enough and affirming in whatever bad ideas they already have.
youtube · AI Harm Incident · 2025-11-25T06:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgwQwGz8xGIMLGBOIR94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx53yq4Pci8fKj-n9R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwYUt8c_cTWWPGhDOx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxRD9R3P1ASXoZ6JPR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz3_bEKcE5kclVZc5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw_FkKWmhUetEW2ZKd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzEzBOkg7ibNvk8ETh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxj5G7t6WwROY_PwA54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRHSgy-JrlF508ijR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw9jsvu90Kw0xxrBPd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]