Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- `ytc_UgyAjroIR…`: I would love to have attended a school like this. My gosh. Not just because I'm …
- `ytc_Ugx-hMF0T…`: Current AI is essentially a probabilistic echo chamber, word salad shaped by rei…
- `ytc_UgzksXuR5…`: uhhh, actually this is what AI is, it is infinite. So no matter if it takes cont…
- `ytc_Ugz-DYsm4…`: When it comes to AI and the AI industry, there are three people I will always wa…
- `ytc_UgwN6BkAq…`: Driver less no. "Self driving trucks", many of these rigs will still need a pers…
- `ytc_UgxPXowAR…`: translation: there is finally a winner in AI and its Anthropic. good for them.…
- `ytc_Ugw_lTjyX…`: China is doing too much in AI technology but people still have job. This AI batt…
- `ytc_Ugxv1Xzsi…`: I hate AI and people who use AI are idiots. One of the teachers in my school con…
Comment
Thank you, especially on your take of the Alignment problem. It reminds me very much of the videos I watched last year by Robert Miles on his personal channel and the Computerphile channel he's part of. Two years back he was talking about how AI lie all the time, and this was before the release of ChatGPT. The alignment problem is the biggest issue to consider, because an AI probably will never be inclined to follow the "spirit" of an instruction. I feel like human history is just on a big loop, and just like the evil djinn of old, General AI is a great power that cannot be placed back in it's bottle if it's allowed to ever have the upper hand in any capacity.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2023-08-20T22:1… |
| Likes | ♥ 35 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz4LmweJWCyvg_WLT54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzlUAjvg07Gfn40e_94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzyE2bKu9n3YwAqQ3F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw8pVMDZ8MhE1Gyf8Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwE0UsrNw6z2lzTUHp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz6cwHuglGeZ4GYB-p4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMCNAnH3scCR_NkDx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw071Ztqhkg0exG7p14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxNbshG2oVOuTOZo294AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwsFiHWjq4cPPisKdl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
```
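The raw response is a JSON array with one record per coded comment. Before a batch like this is ingested, it can be checked for parse errors, malformed IDs, and out-of-vocabulary codes. The sketch below assumes the allowed value sets visible in this sample are the full codebook; the real codebook may define additional categories, so `SCHEMA` here is illustrative, not authoritative.

```python
import json

# Assumed codebook, reconstructed from the values seen in this sample.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse one raw LLM response and return a list of problems found.

    An empty list means every record parsed and every dimension holds
    a value from the (assumed) codebook.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"response is not valid JSON: {exc}"]

    problems = []
    for i, rec in enumerate(records):
        # Comment IDs in this corpus are prefixed "ytc_".
        if "id" not in rec or not str(rec["id"]).startswith("ytc_"):
            problems.append(f"record {i}: missing or malformed id")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                problems.append(f"record {i}: bad {dim}={rec.get(dim)!r}")
    return problems
```

A record that passes yields no problems, so a batch is accepted only when `validate_batch(raw) == []`; anything else is logged and the batch re-queued for re-coding.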