Raw LLM Responses
Inspect the exact model output for any coded comment.
You can look up a comment by its ID, or pick one of the random samples below to inspect it.
- "Or we could just..not make AI? Seriously not that hard lol. Let's go back to how…" (ytc_Ugz60Gho3…)
- "❌ Using ai to vibe code apps ✅ Prompting ai to be a teacher to learn how to code…" (ytc_UgyeegU1o…)
- "@Nada_Chance In the collaboration between an artist and a client, the core is a …" (ytr_UgxmDMUm5…)
- "This is so sad. I hope the new laws coming in for Australia saves any child from…" (ytc_UgxHfs5yb…)
- "I am surprised AI is programmed to “Mack” on each other… robots don’t even know …" (ytc_Ugwdyq2kq…)
- "I wish we could go back to the time when TV was invented and widespread. The wor…" (ytc_UgxGD2cvb…)
- "I hope AI will create new content in Adult videos where you can give input to wh…" (ytc_Ugzt9gOC9…)
- "@goldenalbumen Yes. But the point is that nobody can make money out of it. I'm s…" (ytr_UgzEfy18w…)
Comment
My husband looked further into those tests made to check AI ethics. Firstly, there was a lot of fear mongering surrounding it. The people doing the testing are a company literally built to test the dangers of AI and find solutions. This wasn't a random test with no end goal. They wanted to push the AI to their limits to see whether they have survival instincts that will make them break some of their rules.
Also keep in mind, it was all a simulation, and nobody was actually harmed. I recommend looking up the experiment yourself. It made me feel a lot better.
Source: youtube | Incident: AI Harm Incident | Posted: 2025-12-10T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
{"id":"ytc_UgyFc0wM3xBNIEc0XcB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwEjg-hZ8ld-nMqr2R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyZcT4rtB5toxFGiDJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz9TFnOQUdHUyh7red4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_d7pGCnihOXlKScV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxW5TYTP6OhOF_Yh_V4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzMAqFVscowFB82HYl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxDUERPPkTvk196LcB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxbmuzbsTZpVRLkvmN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxBsaLiCYdlV-CqbDV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"resignation"}
]
```
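A raw response like the one above is a JSON array of per-comment codings, so the "look up by comment ID" step reduces to parsing it and indexing by `id`. The sketch below is a hypothetical helper (not part of any named tool); the dimension names come from the response above, and the allowed values are only those observed in this sample, not a complete schema:

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above (an assumption, not an exhaustive codebook).
DIMENSIONS = {
    "responsibility": {"none", "developer", "government", "ai_itself", "user", "company"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "ban", "regulate", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed", "resignation"},
}

def index_raw_response(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) and
    return a lookup table keyed by comment ID, validating each value."""
    by_id = {}
    for row in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim} value {row.get(dim)!r}")
        by_id[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return by_id

# The last entry from the raw response above, used as a self-contained example.
raw = ('[{"id":"ytc_UgxBsaLiCYdlV-CqbDV4AaABAg","responsibility":"company",'
       '"reasoning":"mixed","policy":"industry_self","emotion":"resignation"}]')
coding = index_raw_response(raw)
print(coding["ytc_UgxBsaLiCYdlV-CqbDV4AaABAg"]["emotion"])  # prints: resignation
```

Validating against a closed set of values when indexing means a malformed or hallucinated label fails loudly at parse time rather than silently entering the coded dataset.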