Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I feel like the only way to guarantee goodness in AI the way it is guaranteed in…
ytc_UgweATzT4…
All countries governments should made policy rules and regulations on limited mo…
ytc_UgwRH7Vgh…
Artists butt hurt that a machine draws 10x better than them😂 and none of the art…
ytc_UgxZbF1Qc…
this works,i think? i ran one of my drawings through chatgpt,it turned out inter…
ytc_UgzvDnhlz…
Ai bros are trying so hard to get respect for having no honor, fucking barbarian…
ytc_UgzYeT5TJ…
@x-an764BE76nyis so obvious unless u have under 80 IQ. Society ain’t smart enoug…
ytr_UgwM254ju…
Why tax struggling citizens to fund universal income? That’s an oxymoron. If AI…
ytc_UgwQo4T4k…
The fact that these AI bros don't understand how Nightshade works, and by extens…
ytc_Ugwqh2T_w…
Comment
43:43 The study with mice proves that our reward centers in human brains is directly connected to working towards our goals... so, by having AI do everything for us, we become lazy, complacent, and, as a result, less intelligent. They called the 2 mouse control groups "the city mice" and "the country mice." The city mice, which had everything given to them without effort could not move forward in the trials to drive a car (designed for their physiology) while the country mice could not only drive, but were also self-correcting so they didn't bump into walls.
The mistake we're making with AI is that we want it to be a product to make our lives easier rather than making it to compliment the lives we want to have, so that we can create more meaning in our lives instead of replacing our efforts.
Just some thoughts to chew on...
youtube · AI Governance · 2025-06-26T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugw0123WmggVm1nXlVl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgzJeATu9h3q3uNmh154AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxWvQJB5fWqcL0GycF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgznJmIoD2G_qXqZ8MN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyNdvtJnreRknpaeAR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy87PIOtMTOWU1Yrhp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZrWB5b2tpVAPRMJJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgytArc62Va0Ep83oFh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzSV9vr9jkezSIyC0F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxBDGqqde2X68VSj1t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
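A raw response like the array above can be checked before it is stored, so that records with malformed IDs or out-of-vocabulary codes never reach the coding table. The sketch below is a minimal example; the dimension vocabularies are inferred from the values visible on this page and may not match the full codebook.

```python
import json

# Dimension vocabularies inferred from the samples shown on this page
# (assumption — the authoritative codebook may include more values).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # IDs on this page start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and drawn from its vocabulary.
        if all(rec.get(dim) in vocab for dim, vocab in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Records that fail validation can then be queued for re-coding rather than silently written with an unknown label.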