Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples:

- “Being “Nice” will get you nowhere. What you fail to understand is we will not be…” (ytr_UgxvAmcpD…)
- “New architectures will definitely be discovered. (Unless we nuke ourselves back …” (rdc_n7zg0ur)
- “Please be careful. These tools can generate powerful-feeling experiences—but not…” (ytc_UgxuHdSOZ…)
- “When medicine was discovered, people want to live forever, when chemistry was di…” (ytc_UgwsVAXX5…)
- “chatgpt is NOT watching your chats, microphone, screen, and stuff and does NOT h…” (ytc_Ugx574SuQ…)
- “For an Asian man to say that kills me. What happened to the one that escaped in …” (ytc_UgxN0f1j-…)
- “Executives with no talent will control AI. They are literally stealing from all …” (ytc_UgzS1fceA…)
- “Universal Basic Income. I keep hearing about this but I never hear anybody expla…” (ytc_Ugzq-x_ug…)
Comment
I'm a researcher in AI, and unfortunately this is just a prominent example of a larger problem in AI development. Our current most effective methods are extremely "data hungry": the only way to build such an AI is with massive datasets, and thus scraping any and all information from the internet has become the norm.
Kate Crawford does a great job outlining this issue (and others) in her book "Atlas of AI".
Source: YouTube, "Viral AI Reaction", 2022-12-29T00:5…, ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyfpC8Md99zDS7gnMN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwIzruNnxXuRePe0xl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwYIpQojiSh8rS8vkF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwZA0i5zgR54bMpAOh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx2WlBeFykivcHwnWx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwhAHXoKR6XEJkZu654AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx7yuRrgDqg4gKESCp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwF32AzdmM9GvH-xTF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz9lhHrfw9Idh9hXJl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgzEapk5HLQPqfKiL2h4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
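The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such a response could be parsed back into the four coding dimensions shown in the result table. The field names come from the response itself; the allowed value sets below are only the values observed in this sample, not necessarily the full codebook:

```python
import json

# Dimension values observed in the sample response above; the real
# codebook may contain additional values not seen here.
OBSERVED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "resignation", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: dimensions},
    warning about any value not seen in this sample's codebook."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        dims = {k: v for k, v in rec.items() if k != "id"}
        for dim, value in dims.items():
            if value not in OBSERVED.get(dim, set()):
                print(f"warning: unexpected {dim}={value!r} for {cid}")
        coded[cid] = dims
    return coded

# One record copied verbatim from the response above.
raw = ('[{"id":"ytc_Ugz9lhHrfw9Idh9hXJl4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}]')
coded = parse_coding_response(raw)
print(coded["ytc_Ugz9lhHrfw9Idh9hXJl4AaABAg"]["policy"])  # regulate
```

Keeping the comment ID as the dictionary key matches the "look up by comment ID" workflow this page supports.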