Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytc_UgxKnbuj7…: "I program like 3x faster with AI. You just have to be specific, localize the cha…"
- ytc_UgxR1hoX0…: "The best explanation of AI I have heard to date, thanks bud. Happy new year to y…"
- ytc_UgzM6c898…: "This person is predicting that energy will be forever, maybe it will, but what h…"
- ytc_UgyywK_pN…: "I generated a meditation using my script and an AI voice-over, but it didn’t wor…"
- ytc_Ugxeonuer…: "I believe ChatGPT is a tool with the potential to contribute meaningfully to soc…"
- ytc_Ugz_c_tq5…: "REPLACE HUMANS WITH ROBOTS! EVERY ROBOT BELONGS TO 1 HUMAN. DO NOT SELL ROBOTS …"
- ytc_Ugzd5Uir8…: "there is a plague of people doing art commissions and the people are using AI an…"
- rdc_ls5enr6: "There’s actually photographer who does just that with no ai. More accurately, a …"
Comment
Current A.I. methodology and technology are still just toasters. Making the models and the data centers running them enormous isn't going to work at all. Oh sure you've increased the granularity of the models but you've merely increased the granularity of the noise in the outputs as well, which will make finding the mistakes in the outputs and what caused them exponentially more difficult, and the models far less secure and more vulnerable to outside manipulation and sabotage. Except for limited and VERY narrowly focused tasks, it's not going to be useful, it's not going to scale. It's not going to bring in "The Singularity" or whatever nonsense the snake oil salesmen have in the brochure. Like ANY technology, the hideous swine who end up wielding it are the real danger. People suck, don't trust them.
youtube · AI Moral Status · 2025-11-18T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
{"id":"ytc_UgwIAaPPRLXE4Fn4Sqh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugys6Qv-XehHwSYnnQp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzx5nPOWSs-WxwErs94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxBeTJFNwF3SQq6Cl54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxDPgN6R-_bVSvuJzR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwqOajLar9znY6abtl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKR9cn5lwmHfefHth4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxZrgBzrlT0P2OR8Pp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyVCWbBcG5RT6j5Ge94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzlT0d2Hqa_Tt3bYTR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
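A raw response like the one above can be turned into a per-comment lookup table with a few lines of parsing. The sketch below is a minimal example, not the tool's actual implementation: it assumes the response is a well-formed JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields (as displayed here), and the embedded sample record is copied from the last entry of the response above.

```python
import json

# Sketch: parse a raw LLM coding response into {comment_id: {dimension: value}}.
# The four dimensions match the Coding Result table on this page; anything the
# model omits is filled with "unclear" rather than dropped.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict:
    """Map each comment ID in a raw LLM response to its coded dimensions."""
    records = json.loads(raw)
    codings = {}
    for rec in records:
        codings[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return codings

# One record copied verbatim from the raw response shown above.
RAW_RESPONSE = (
    '[{"id":"ytc_UgzlT0d2Hqa_Tt3bYTR4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]'
)

codings = parse_codings(RAW_RESPONSE)
print(codings["ytc_UgzlT0d2Hqa_Tt3bYTR4AaABAg"]["emotion"])  # resignation
```

Keying by comment ID is what makes the "inspect the exact model output" view above cheap: once the response is parsed, any coded comment resolves to its dimension values in a single dictionary lookup.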