Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytr_UgyI3YGgu… — "I get that! The interaction between humans and AI can sometimes feel unsettling.…"
- ytc_Ugwesolea… — "FACIAL RECOGNITION DOES NOT WORK ON POC!!! The software relies on shadows to cre…"
- ytc_UgzXovkni… — "Okay so I'm a very anti-AI person definitely, but I wanna justify the bad art re…"
- ytc_UgxxVtrbl… — "I get baffled at how AI advancements are explained away as this growing monster …"
- ytc_UgyQgALNM… — "If AI is smarter than human, maybe we can use it to create new inventions…"
- ytc_Ugwwap4zy… — "AI bots dont pay taxes. The tech moghuls who own them already dont. Good luck ru…"
- rdc_n0fvw7k — "Role-playing prompts create false authority bias. LLMs don't gain expertise from…"
- ytc_Ugym076uQ… — "Cinematography is on point as per usual. I'm generally not a fan of A.I. assiste…"
Comment
This is an AI problem. Because the it's still called AI even tho there's no intelligence in it, but the naming convention keeps fooling people into believing the machine is giving them "thought out" answers, and isn't there just to keep reaffirming their existing biases to keep them interacting with the chat to bump up stats-__-
Chat GPT isn't a technology that does anything new we couldn't do two years ago, and what it does, it does in a worse ways than the tools we already have available. Gen AI cultivates lack of research and curiosy, so yeah, it's at fault in this case.
Source: youtube · Topic: AI Harm Incident · Posted: 2025-11-29T19:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugz0IpmhFdE0b8rrQ-x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxxdPKuQbIng1xl8Ap4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgwJ_C7GDMo5e7c60dh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgySW-5rxvSHfLDviTR4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"industry_self","emotion":"indifference"},
 {"id":"ytc_UgyERgUNBlCQ3of_-1J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgzfYEOnmtv9w4YT1yB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugxy9tGrWXSP8B1HOBx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugz9h8fEIlLMddmsAo54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxhKsU9Du2EEBeo6YR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgzeWCe3SeOu5rxp8LN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]
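Since the model returns one JSON array of per-comment records, looking up a coded comment by ID amounts to parsing that array and building an index. A minimal sketch (the field names follow the JSON above; the two records embedded here are an excerpt, and nothing about the tool's actual implementation is implied):

```python
import json

# Excerpt of a raw coding response: a JSON array of per-comment records.
raw = """[
  {"id": "ytc_UgyERgUNBlCQ3of_-1J4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzfYEOnmtv9w4YT1yB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "fear"}
]"""

records = json.loads(raw)

# Index by comment ID for constant-time lookup, mirroring the
# "look up by comment ID" view.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgyERgUNBlCQ3of_-1J4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # company mixed
```

The same index makes it easy to cross-check the rendered coding-result table against the raw model output for any given comment.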