Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- I predict a future where countries have digital massive "AI kaijus" that constan… (`ytc_UgwhVkee8…`)
- Tried GitHub Copilot, but I ended up going with Axalem for its clean focus durin… (`ytc_Ugw-Mnvdf…`)
- Being upset with the process and the tools isn't productive. AI is here to stay.… (`ytc_UgzITqF_M…`)
- scariest part of human invention is allowing ROBOT to use a weapon ( NOT SAFE )… (`ytc_UgwWbDVgx…`)
- Kiiiind of stretch clickbaiting with "be polite to AI", going into a total non-s… (`ytc_UgxewpG0M…`)
- This honestly sucks! I’m starting to believe it’s getting harder everyday to be … (`ytc_Ugxuk8rQP…`)
- They can also do first principal research and come to it's own conclusions. No h… (`ytc_UgyxAxMNb…`)
- The pursuit of self-driving has created a generation that sees a car the same wa… (`ytc_UgxFtU7bS…`)
Comment
AI safeguards are useless. You can prompt the AI to give you any answer you please. Even if it warns against something, you can say something like "Well I wasn't planning to do that, I just want to know the answer to my question". And it will assume you're not going to do the dangerous thing, then go on to describe exactly how to do it.

youtube · AI Harm Incident · 2025-12-16T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw6VXLpThTbG6TXO1N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyNmsOIXU9-696LeJ14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_NgzSMqFTUzTG8Rl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQlHLMQlOA9JpMRXl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw-cAwbTuO_h8P7du94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxA21bdgfDnxxRVr8F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwSQ9Iv2kcoYDRF3Fp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxQA-4u2TKcjapFfgd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy-wCuIpj7fMWnBfDF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx8-C8uCN40B-pUYAJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
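Because the raw response is a JSON array keyed by comment ID, the per-comment lookup this page performs can be sketched in a few lines. This is a minimal illustration only: `index_by_id` is a hypothetical helper, not part of the actual pipeline, and the two records below are copied from the response above.

```python
import json

# Raw batch response from the coder model: one JSON object per comment,
# carrying the four schema dimensions (responsibility, reasoning, policy, emotion).
# Two records copied verbatim from the response above, for illustration.
raw_response = """[
  {"id":"ytc_Ugw6VXLpThTbG6TXO1N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyNmsOIXU9-696LeJ14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and index the records by comment ID.

    Hypothetical helper: mirrors the 'Look up by comment ID' behavior shown
    on this page, assuming the response is a well-formed JSON array.
    """
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_Ugw6VXLpThTbG6TXO1N4AaABAg"]["policy"])  # regulate
```

A malformed model response would raise `json.JSONDecodeError` here, which is one reason to inspect raw outputs for any coded comment.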