Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `rdc_ncfxoq2`: Since you did computer science, which is a difficult field, do you like working …
- `ytc_UgzvCPkHu…`: I used ChatGPT and Gemini to help me writing simple iOS apps. It saved me time b…
- `rdc_dsbcpvj`: > There is sudden bipartisan consensus that we must help the poor people of B…
- `ytc_UgzAOrXJ5…`: "Now I created AI, the destroyer of the real world" - Hinton, 21st century Oppen…
- `ytr_Ugx3YEa9X…`: Narrow AI has existed for decades and is now present in all sorts of cars and ap…
- `ytc_UgydWHLrm…`: Just wait, the coming storm for rights is just around the corner. You'll have t…
- `ytc_Ugx7udxdk…`: Disabled people don't need AI to make art for them... But tech bros do need AI t…
- `ytr_UgwOIjISM…`: Well, right now it's possible to hack your car. This have happened due to noone …
Comment
It always bugs me that we train AI using some of the worst things humans create. Deceit, blackmail, manipulation, and all of the plights that we would never eagerly teach our children, we eagerly train AI on. Copious amounts of data are collected (generated, these days) with no consideration given to censoring or filtering. Imagine if you fed the AI training dataset to a naïve biological entity: would you expect that to produce a well-behaved, well-intentioned individual? So reinforcement training is used to correct for misaligned behaviors. That is like first letting a kid develop any and all bad habits and traits, and then giving it candy (or punishing it) to correct its behavior. Yes, we lack data, sufficient good data. But in lieu of good data for training, we opted for dog poop. How much of the training data used for AI would you feed to your kid? And we wonder if AI will misalign?
youtube · AI Governance · 2025-06-17T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgxDu-nLYUgcsd1CFLl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyQqUVU1F21TexpHVF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxoZBIshgK-tALnxXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxxK86eGp2nFh008t94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxFLO3KUEV5LpP-bw94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzTHCbVw9xuUAC5fyd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTCIjHRBQ_eQU8v_R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw4mVgGoNGLK-Ri-kl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgywQFK-p2unrM6NBW14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwIlPNIaTn9_IPSd2N4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]
```
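A raw response like the one above can be turned into per-comment codes with a small parser. The sketch below is a minimal assumption-laden example, not the tool's actual pipeline: the `ALLOWED` vocabularies are inferred only from the values visible in this log, and `parse_coding_response` is a hypothetical helper name. It falls back to `unclear` for any missing or out-of-vocabulary value, which matches the all-`unclear` row shown when a comment's ID is absent from the response.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the values
# seen in this log; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "user",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed",
                "unclear"},
}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Any missing or out-of-vocabulary value is coerced to "unclear",
    mirroring the fallback rows seen in the dashboard.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # skip malformed rows with no comment ID
        coded[cid] = {
            dim: row.get(dim) if row.get(dim) in vocab else "unclear"
            for dim, vocab in ALLOWED.items()
        }
    return coded
```

Looking up a comment ID that the model never returned then simply yields no entry, which a caller can render as a row of `unclear` values.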