Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The world is incredibly naive about AI. You must research the scientists that ar…
ytc_UgxKs0Tle…
I think the responsible thing to do is obvious. Completely destroy every artific…
ytc_UgzzCzr6v…
Ai will not Destroy humans , humans self Destroy there self because of corruptio…
ytc_UgzGi7Aki…
Why don’t you guys have that same smoke when AI is threatening to replace softwa…
ytc_Ugy_7tOWQ…
It already is and everyone knows almost evryone is just in denail. (this doesn't…
ytc_UgxZmgJDQ…
@Bug10ten
What I said was an example to show how AI can help in different areas…
ytr_Ugy0lOF8_…
There is NO way how an AI will eve replace hollywood screenwriters. See, AI will…
ytc_UgwK18Lby…
The models are made to extract and exploit, especially in the global south. The …
ytc_Ugzuuo1rI…
Comment
Hi! I do research in AI (specifically social impact) and you are SO valid for hating on these AI deployments.
I really appreciate how you focus on the application and not the technology.
Stable diffusion, transformers etc etc. *can* be used responsibly and without making life worse for humans. It just isn't.
I love making algorithms, optimizing, viewing problems in weird and wild directions, but it's a genuine source of fear and depression that something I design could be used to hurt someone.
I literally left my previous PhD advisor because he called me a technophobe for talking about regulation.
youtube
Viral AI Reaction
2025-03-18T23:3…
♥ 14
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxsHlyYUsyrWeQ7vCh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxyTuIqRVM2Sbm2FhJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzFNKAVdGWTN-Xm75l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxnZ8tEhRZ5w4sLvXR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzvIbihWjlKNXZuUKF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwBjdCsjhtTaOMuq4t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxEBqLg34vlgAbG4ml4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxDLSLaX9WbHi77vGh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw9U-p6lvIj9qdZu-h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwWKmp2J7Eal_U5sQN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
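The raw response is a JSON array of per-comment coding records. A minimal sketch of how such output might be parsed and validated before it reaches the dashboard (the field names come from the response above; the allowed-value sets are inferred from the visible data, not from an authoritative codebook):

```python
import json

# Allowed values inferred from the coded samples shown above;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"outrage", "approval", "resignation", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response into coded records, dropping invalid rows."""
    records = json.loads(raw)
    coded = []
    for rec in records:
        if "id" not in rec:
            continue  # skip rows with no comment ID to join on
        # keep the row only if every dimension holds a known value
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded.append(rec)
    return coded

raw = ('[{"id":"ytc_UgwBjdCsjhtTaOMuq4t4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
print(len(parse_codings(raw)))  # 1 valid record
```

Rejecting rather than repairing malformed rows keeps the coded table trustworthy; rejected IDs can be re-queued for another model pass.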