Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
why do we must make robots with conciousness and feellings. if a robot doing its…
ytc_UgjU66U6S…
I fffffffucking hate AI with a passion and try so hard to avoid it where I can.…
ytc_Ugy3WmGz5…
AI is soulless and steals from artists, and also drains resources such as water
…
ytr_Ugz6SkSWL…
This is insane! I hadn't ever heard of this case and I'm in Florida. I've always…
ytc_UgzBnhhrS…
1. You learned. These programs did not. They are just mindless pattern fitting a…
ytr_Ugx6iN1EG…
For now we have AI which is unconscious, AI images resemble dreams to me and I t…
ytr_Ugy6kGsgl…
I deeply admire Sir Bernie Sanders, his arguments are grounded and hard to refut…
ytc_Ugzltlsut…
Leaning on LLMs for research and writing leads to long term cognitive decline. O…
ytc_UgyYIGI4T…
Comment
Bernie. I think we’re too late for this given the fact that we are in this race for who will develop the smarter AI module and AI agent. The stakes for geopolitical Suprema for exceed the repercussions associated with domestic policy is a result of this we have missed the opportunity to incorporate safeguards into these AI modules and agents. The genie is out of the bottle so to speak. And it’s not going back in. Even if the US were to stop and incorporate safeguards and ethical constraints including social economic policies, we are unfortunately not in a position from the geopolitical standpoint to do so it’s like watching a train wreck.
youtube
AI Jobs
2025-11-27T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwEPkBdGBgS7kE__xl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyq7yYmz6JlYwmWo2t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxvyHFbxZcP9P-FDIV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxN7pLPJ14Q4Naedux4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyXYilsPGlHZBHpEYZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxHGYq0cKwp69JFt094AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyrK9aJ5A3s5AIj5C14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-sfH8qmV-bKWwnPt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzkHI1dcSiV7Hm_WAl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgzqlwJnf3hy3ClACR14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
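A minimal sketch of how a raw response batch like the one above could be parsed, validated, and indexed by comment ID. The category sets in `ALLOWED` are inferred from the values visible in this example and may not be the full codebook; `parse_batch` is an illustrative helper, not part of the tool itself.

```python
import json

# Allowed values per coding dimension (assumed from the examples on this
# page; the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "approval", "mixed"},
}

def parse_batch(raw_json: str) -> dict:
    """Parse one raw LLM response and index the codings by comment ID.

    Raises ValueError if any row contains a value outside the codebook,
    so malformed model output is caught before it reaches the database.
    """
    rows = json.loads(raw_json)
    coded = {}
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# One row from the batch shown above, used as sample input.
raw = ('[{"id":"ytc_UgzkHI1dcSiV7Hm_WAl4AaABAg",'
       '"responsibility":"government","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"resignation"}]')

coded = parse_batch(raw)
print(coded["ytc_UgzkHI1dcSiV7Hm_WAl4AaABAg"]["emotion"])  # resignation
```

Indexing by ID supports the "Look up by comment ID" view directly, and failing fast on out-of-codebook values keeps the coding result tables consistent with the schema.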