Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Self driving vehicles can't be safe until they are fully trained on abnormal sce…" (ytc_UgyLQ4NV-…)
- "AI still has not gone through the trough of disillusionment in the Gartner hype …" (ytc_UgwR4kNCM…)
- "magic might liberate humans again after being demonized , after all can the supe…" (ytc_UgwihZcPm…)
- "I think so long as AI is open to training we can potentially input important lim…" (ytc_UgwbtsOP-…)
- "Have you ever heard a load of bs. This will never happen because humans are grea…" (ytc_UgxddJo4E…)
- "Another way to look at it is New York cities metropolitan area has an economy of…" (rdc_lp8ivl0)
- "Every robot that takes over 1 or more human jobs should be taxed, just as a huma…" (ytc_UgwE-aAYv…)
- "at this point I don’t think ai animation has any chance of overtaking human, it’…" (ytc_UgxzP0wQZ…)
Comment
It is really true, human is pretty freaked the way they are, and it ever so rarely is; the human who uses their brain of rationality.
Mybe Ai conscious could be a good idea, especially for good; I mean.
Let's be real.
Humans are the most destructive species to ever exist; and with how crazy and lunatical; current way too self-opinionated-centric society and agendas are going.
Human stupidity is too much, that even AI seems better, even if I'm against Ai replacing humans.
FACTS ARE JUST FACTS.
I said this before; if people don't fix themselves, eventually someone with an iron hand (immovable, unpreventable) will force the fix....
youtube · AI Governance · 2024-04-28T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxr1OSFgC3jW750LF14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgycxNhk1Ps54vcsynR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyhnNP6Np6bq9aYlZZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzKkyW3j1nZww2ORrB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzrlJ_zJ8utaYY9aSB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwCw860LKZX4_a-YpV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyhU0EswyzoN17IHMd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzaAR2js97VyUKoIrh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzCtJIQXttkBvLqsGt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy0wa54PswTYyUUNl54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
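For downstream analysis, a raw batch response like the one above can be parsed and indexed by comment ID. A minimal sketch, assuming the field names shown in the response; the `parse_codings` helper and the required-field check are illustrative, not part of the tool itself:

```python
import json

# Abbreviated raw response (two rows taken verbatim from the output above).
raw = '''
[
  {"id":"ytc_Ugxr1OSFgC3jW750LF14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyhnNP6Np6bq9aYlZZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
'''

# Fields every coded row is expected to carry (inferred from the sample output).
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codings(text: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment ID.

    Raises ValueError if any row is missing an expected field.
    """
    rows = json.loads(text)
    codings = {}
    for row in rows:
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id', '?')} missing fields: {missing}")
        codings[row["id"]] = {k: row[k] for k in REQUIRED_FIELDS if k != "id"}
    return codings


codings = parse_codings(raw)
print(codings["ytc_Ugxr1OSFgC3jW750LF14AaABAg"]["emotion"])  # resignation
```

Validating up front like this surfaces malformed or truncated model output at parse time rather than mid-analysis.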