Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Every trade(craft) will continue to exist as a profession. AI does not lay brick…
ytc_UgzdWWIT6…
While I love seeing everyone's beautiful art that came out of this, isn't this n…
ytc_UgxIf0dZU…
I will not be willingly supporting or patronizing companies that are replacing i…
ytc_UgzhyuoCs…
Don’t underestimate governments about AI like they are slow always always rememb…
ytc_UgzyIDhz0…
just focus on details , you just cant say you will replace programmers buy ai ch…
ytr_UgyM4xPR6…
No more calling in sick, no more insurance, vacations ect. Wait a minute these a…
ytc_UgxePQYyj…
Pfft. "AI could be smarter than us" of course AI is smarter than us, it doesn't …
ytc_UgyLqgdyJ…
Pfft, it's ironic you say you're a 'real writer' when you use grammar checkers i…
ytc_Ugy1yUjin…
Comment
Excelent question, but I'd like to add something.
Recently Nick Bostrom (the writer of the book Superintelligence that seemed to have started the recent scare) has come forward and said ["I think that the path to the best possible future goes through the creation of machine intelligence at some point, I think it would be a great tragedy if it were never developed."](http://www.ibtimes.co.uk/nick-bostrom-it-would-be-great-tragedy-if-artificial-superintelligence-never-developed-1501958) It seems to me that the backlash against AI has been a bit bigger than Bostrom anticipated, and while he thinks it's dangerous he also seems to think it ultimately necessary. I'm wondering what you make of this. Do you think that humanity's best possible future requires superintelligent AI?
reddit
AI Bias
Posted (Unix timestamp): 1438016751
♥ 442
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_lv8lnbd", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_lv8cgsc", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_cthw656", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_cthxq37", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_cthzy1i", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"}
]
```
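The raw response is a JSON array of per-comment codes. A minimal sketch of how such a response might be parsed into a lookup table keyed by comment ID — this is an illustrative assumption, not the project's actual parser; only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the JSON shown above:

```python
import json

# Two example rows copied from the raw response above.
raw_response = """[
  {"id": "rdc_lv8lnbd", "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_cthxq37", "responsibility": "unclear", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]"""

# The five dimensions visible in the coding-result table.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

codes = {}
for row in json.loads(raw_response):
    missing = REQUIRED_FIELDS - row.keys()
    if missing:
        # Reject malformed rows rather than silently coding them.
        raise ValueError(f"code {row.get('id')} missing fields: {missing}")
    codes[row["id"]] = row

print(codes["rdc_cthxq37"]["emotion"])  # approval
```

Keying by `id` makes it straightforward to join each coded row back to the original comment record shown above.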