Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytr_UgzBp51Tq…: "The funniest thing to me about the extreme 'AI' hypebros is that they are mostl…"
- ytc_UgxbaPwnP…: "It takes a lot to get an LLM to tell you that it is, in fact, the jews, but it e…"
- ytr_Ugx7tr1Os…: "@teasippers2776 the difference here is your information is being used for randomi…"
- ytr_Ugw3Qx_TO…: "two reasons i can think of: A) artistic: claiming you're an artist when AI is do…"
- ytr_Ugyb5c66Z…: "Compliance businesses such as banking, government, insurance etc. may not use AI…"
- ytc_Ugxo-fR6a…: "You could argue that the reason why AI art exists is because you’re not poisonin…"
- ytc_UgxIyxK-b…: "This #AI app is far LESS dangerous than nukes: Otter https://otter.ai takes note…"
- ytc_UgxzBQd41…: "I talk to characters more than real people. With real people I just smile and wa…"
Comment
Such a pity the debate got stuck on the meta level. OpenAI has been fine-tuning behavior in GPT 3.5 for months by simply rewarding friendly answers. The result? In the first days of its release, it's been threatening people for questioning the fake facts it's been telling them or claiming it's hacked web-cameras & spying on people. It's relatively easy to create an intelligent system (you can reward correct answers), it's infinitely harder to create a system that thinks based on moral goals, because we don't know how goals emerge, let alone how to correct them once they do. That's a very technical problem Mitchell just doesn't seem to be familiar with - the concern isn't that a superintelligent AI won't get what we want but that it won't care, just like we behave differently from what we've been selected for by evolution.
Platform: youtube · AI Governance · 2023-08-02T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwavikaAMC_ucQ0x9h4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw6IOqZwMcewU2CbuV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwtdjIuSgwcMRt016J4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugw4-agCdVl3pjy4Hfd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyHjX_f4QKz6AB1RVt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyQ0DtmFQxkrFu2X2J4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgynLYZwDYaGncX1JJB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzJwQ_qPwcXl7w6jjl4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxROEnFnRwgta-ItIR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy8T12PciZCtmv_UGp4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"}
]
```
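The "look up by comment ID" step above can be sketched in a few lines: parse the raw model output as JSON (falling back gracefully when the model returned malformed text) and scan for the matching `id`. This is a minimal illustration, not the tool's actual implementation; the function name `lookup_coding` and the shortened sample payload are assumptions, while the field names mirror the JSON shown above.

```python
import json

# Shortened sample in the same shape as the raw LLM response above.
# (hypothetical subset, for illustration only)
RAW_RESPONSE = """
[
  {"id": "ytc_Ugw6IOqZwMcewU2CbuV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyHjX_f4QKz6AB1RVt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for one comment ID, or None.

    None is returned both when the raw output is not valid JSON
    (LLMs sometimes wrap or truncate their JSON) and when the ID
    is absent from the parsed batch.
    """
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return None
    return next((row for row in rows if row.get("id") == comment_id), None)

coding = lookup_coding(RAW_RESPONSE, "ytc_Ugw6IOqZwMcewU2CbuV4AaABAg")
```

With the sample above, `coding` resolves to the "developer / consequentialist / regulate / fear" row, matching the Coding Result table.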