Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- I use AICarma to understand employers' perspectives on AI skills, which has grea… (ytc_UgxZ1JtE3…)
- Tech suits understand that this shit is 90% hype 10% sloppy garbage, but shareho… (ytc_UgxrnGFPV…)
- One can hope for government reform (and should do what one can to that end), but… (ytr_UgzojYTo1…)
- Our role is to give birth to AI and then we're just animals in a zoo. The AI we… (ytc_UgxvzfPU6…)
- ChatGPT has nothing to do with that, it can help you understand every of those c… (ytc_Ugx-lIO-A…)
- With AICarma, I can easily monitor student interactions with AI, which informs m… (ytc_UgxUVHXTW…)
- What we really need is something that doesn’t use AI generated text from the Int… (ytc_UgxLiUIV_…)
- Humanity cannot rest until it destroys itself. Artificial intellig… [translated from Russian] (ytc_Ugx-uH_KG…)
Comment

> The key problem is that AI is learning from human generated data sets. So in effect it holds up our own biases and magnifies them back at us. although different approaches to learning can get around that, it is still an open question how we get even something like a language model to tell the truth, since we don't know the truth of everything ourselves. Most likely we create AI's that do what they think we want to see not what we want them to do. small difference but an ever so important one.

youtube · AI Bias · 2022-12-24T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwSllbU4Z4ADIZ9rid4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzQ1l0zYxOA9euRTxF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwPlavmnTjotP6TH454AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzeoF8bBA-l2cdtlAZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyD8L-YKze86G7qkU94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzkoncLwMvG1xD6oxB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugw1tEgHzryYlVz7hyF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzqIg8yrcVS83RxoGp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwFy4OsDIXNkyBM8IF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw0p3wV8sUTUpHAfnh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
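A raw response like the one above has to be parsed and checked before the codes land in the database: the model may return a value outside the code book. The sketch below shows one way to do that, assuming the allowed values are those visible in the coded samples on this page (the actual code book may define more); `validate_codes` and `ALLOWED` are illustrative names, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the samples shown above.
# This is an assumption, not the tool's actual code book.
ALLOWED = {
    "responsibility": {"distributed", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "unclear"},
    "emotion": {"resignation", "outrage", "indifference",
                "approval", "fear", "mixed"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM response and reject records with unknown values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}"
                )
    return records

# Hypothetical single-record response, shaped like the output above.
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"resignation"}]')
records = validate_codes(raw)
print(len(records))  # → 1
```

A record with a value outside `ALLOWED` raises `ValueError` instead of silently entering the dataset, which is the behavior you want when the model hallucinates a new category.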