Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or browse these random samples:
- ytc_UgwVUh2yO…: YALL HELP I CANT GET CHAT WITH CHARACTER AI- I TRY TO SEARCH UP A BOT AND IT JUS…
- ytc_UgxyppPb4…: When a mobile AI can totally replace a 4U JBOD, then I'll start to worry. We can…
- ytc_UgzXQ6BWM…: This is the true cost of using AI. The amount of water used in these facilities …
- ytc_UgxuKOSEU…: Another controversial topic about AI that also has an economic concern is AI rel…
- ytc_Ugy6As73Q…: I think it's funny because the fucking AI bros are saying things like "the progr…
- ytc_Ugz4w-31q…: I mean it's not AI's fault - it's just absurd... However - this is a good opport…
- ytc_Ugw0X3zl6…: I write and use AI as my editor in chief trained in Chicago style ABC all the ba…
- ytc_UgwJl_J4d…: It won't be cheap. Ai will only replace manufacturing jobs. It's not profitable …
Comment
This is cute, and wholesome compared to the alternative, but AIs are not humans. Once AI is more powerful than humans, it will accomplish whatever its goals are no matter what happens to us, and if those goals do not explicitly include happy, healthy humans, we will be paved over. No one knows how to robustly put a specific goal into an AI at all, and if we did, we wouldn't know exactly what it should be.
Also of note: The object-level research on the safety of future powerful AI systems does not rely on anthropomorphism, but it's difficult to communicate anything about AI to a lay audience without using anthropomorphic language. Besides, sometimes the terms are just as suitable for AI as for humans, after you get through all the relevant disanalogies.
Platform: youtube
Topic: AI Governance
Timestamp: 2025-08-28T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
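
Each coding assigns exactly one label per dimension. As a minimal sketch, such a record could be represented and validated like this in Python; the allowed label sets below are inferred only from values visible on this page and are an assumption about the full codebook:

```python
from dataclasses import dataclass

# Label sets inferred from values seen on this page (an assumption:
# the actual codebook may define additional categories).
RESPONSIBILITY = {"ai_itself", "developer", "company", "distributed", "none"}
REASONING = {"consequentialist", "deontological", "contractualist", "mixed", "none"}
POLICY = {"regulate", "liability", "ban", "none"}
EMOTION = {"fear", "outrage", "resignation", "indifference", "approval", "mixed"}

@dataclass(frozen=True)
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject any label outside the observed value sets.
        checks = (
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        )
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```

With these sets, the record shown in the table above (ai_itself, consequentialist, liability, fear) constructs cleanly, while a typo such as "regulation" would raise at construction time.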
Raw LLM Response
[
{"id":"ytr_UgzFQoiUNK8cFydkzKZ4AaABAg.AMIoJqH6TTSAMIo_Ra9P0y","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytr_UgzFQoiUNK8cFydkzKZ4AaABAg.AMIoJqH6TTSAMIxvCj8AOx","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugysvu635SbhyhUtxjB4AaABAg.AMIk6oCb5bNAMQmhuQbzhg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxEkAZjHJit6O2aAeF4AaABAg.AMIjzr5IP0WAMIvkCXufnL","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw3xVFrOOkMAWMNEvx4AaABAg.AMIjozlplyIAMImtlHwHsr","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugw3xVFrOOkMAWMNEvx4AaABAg.AMIjozlplyIAMIpgIrJQMn","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytr_Ugx-qhBJdWkdN46BMy14AaABAg.AMIjawGQVN8AMJ7k-Uiga9","responsibility":"none","reasoning":"none","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugx-qhBJdWkdN46BMy14AaABAg.AMIjawGQVN8AMO1qxsX_pK","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_Ugz2v6fCE0RM_biY1ux4AaABAg.AMIjEpbYht2AMIjqPADmTa","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytr_UgxfjUXo_FR_ikQV_O94AaABAg.AMIiksp6iTHAMJGjaIve89","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
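
The "Coding Result" table above is recoverable from this raw array by matching on the comment's ID. A minimal sketch of that lookup, assuming the raw response is a plain JSON array (the fence-stripping branch is a defensive assumption about other model outputs, not something this response needed):

```python
import json

# raw_response_text stands in for the model output shown above, shortened here
# to the one record that matches the "Coding Result" table.
raw_response_text = '''[
{"id":"ytr_Ugx-qhBJdWkdN46BMy14AaABAg.AMIjawGQVN8AMO1qxsX_pK","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

def parse_raw_response(raw: str) -> dict[str, dict]:
    """Index one raw LLM response (a JSON array of codings) by comment ID."""
    text = raw.strip()
    # Defensive handling for responses wrapped in markdown fences.
    if text.startswith("```"):
        text = text.strip("`").removeprefix("json").strip()
    return {record["id"]: record for record in json.loads(text)}

codings = parse_raw_response(raw_response_text)
record = codings["ytr_Ugx-qhBJdWkdN46BMy14AaABAg.AMIjawGQVN8AMO1qxsX_pK"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {record[dimension]}")
# -> ai_itself, consequentialist, liability, fear
```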