Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
In a nutshell...the tech driven industrial way of living has led us to this point. Has the tech driven industrial way of living on earth been advantageous in the long run? Of course not. Indigenous have lived for thousands of years and in maybe a few hundred we "civilized" folks have created one disgusting scene after another across our globe. For humans, for animals, for nature our earth has become a sad expression of ourselves. AI is a result of this timeline and way of thinking. It will not be good, it will ultimately be like the mechanism and thinking that created it.
youtube
AI Governance
2024-05-01T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
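The four coding dimensions in the table above can be captured as a small typed record. This is a minimal sketch: the example value sets in the comments are inferred from the responses shown on this page, not from an authoritative codebook, and the class name `CodingResult` is a hypothetical label.

```python
from dataclasses import dataclass

# One coded comment, mirroring the dimensions in the Coding Result table.
# Example values are inferred from this page, not an authoritative codebook.
@dataclass(frozen=True)
class CodingResult:
    responsibility: str  # e.g. "developer", "company", "ai_itself", "user", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # e.g. "regulate", "none", "unclear"
    emotion: str         # e.g. "outrage", "fear", "curiosity", "indifference"

result = CodingResult("developer", "consequentialist", "unclear", "outrage")
print(result.emotion)  # outrage
```

A frozen dataclass keeps each coded record immutable, so a code assigned to a comment cannot be silently overwritten downstream.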
Raw LLM Response
```json
[
{"id":"ytc_UgxBjYfrpYmUN9qz_Mx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"curiosity"},
{"id":"ytc_UgyrhWjs6plXf_0aCst4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxBgBcIZVjwJfG-ZS14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzXVnoQ8OKOB026aSF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxNpak4WKulyQb2dEN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxqo0_q6DSi7Cev_-V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyIp_ULPiLj_DtLMvV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxa6Vfjhbvzwy952OF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgztDtPUbO_Gp2DKljd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzZTzvPYZbSQAWOIhR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
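Because the raw response is a JSON array keyed by comment ID, looking up the code for any comment is a parse-and-index step. This is a minimal sketch, assuming the response above is available as a string; the two rows and their IDs are copied verbatim from the response shown on this page.

```python
import json

# A subset of the raw LLM response above: a JSON array of per-comment codes.
raw_response = """
[
{"id":"ytc_UgxBjYfrpYmUN9qz_Mx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"curiosity"},
{"id":"ytc_Ugxqo0_q6DSi7Cev_-V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

# Index the coded rows by comment ID for constant-time lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

code = codes_by_id["ytc_Ugxqo0_q6DSi7Cev_-V4AaABAg"]
print(code["policy"])  # regulate
```

In practice the same index is what drives a "look up by comment ID" view: one parse of the batch response, then dictionary access per comment.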