Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (comment text and IDs truncated in the export):

- "no offence i hope you can take that with a grain of salt .. i just woke up at 19…" (ytc_UgyirkLFa…)
- "While I understand the power of Western propaganda, I don't trust any world gove…" (ytc_UgxaMhsCz…)
- "Forcing the Ai to train obsolete humans. Just put the Ai in charge of the machin…" (ytc_UgxFA-CLY…)
- "If they keep firing people and replacing them with robots, eventually someone wi…" (ytc_UgwAqpC0N…)
- "That reminds me of that that time I was looking at my wallpaper which was the p…" (ytc_UgwOVauFv…)
- "The advertising plug totally broke the immersion for me. Now I'm not sure how th…" (ytc_UgwKWebbc…)
- "This person really needs to do his research. Bill Gates, Elon Musk, and Geoffrey…" (ytc_Ugw8J-_x3…)
- "It's not just art. I'm a Sleep tech. Half my job is sitting at a computer and in…" (ytc_UgwLDIUX6…)
Comment
Disagree, I am a software engineer and I have been following AI for a while before the hype. It's actually quite scary how fast this field has advanced in the last 10 years. It's already pretty clear that this will become more and more advanced. I am not saying that AI will take over the world. But whoever will have control over the most used platforms will have control over the information we receive, our educational systems and much, much more.
The dangerous part about these systems is that they are a like black-box. You give it some input and output is rolled out. However, you receive little to no information about the process itself and the dataset that is used to train it(unless its open source, fairly uncommon rn). It is very easy to exert control over people this way. As most of us won't be questioning what the models spit out as output once this becomes a standard.
youtube · AI Governance · 2023-05-30T16:0… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id": "ytr_Ugzg1DHNpb4zFsWCdQp4AaABAg.9q-e2UH5Z1V9quTXzU0TAa", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwmQ6GVT6aPmwYsIPB4AaABAg.9psaFvl5Yk09qH231rkeOp", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwmQ6GVT6aPmwYsIPB4AaABAg.9psaFvl5Yk09qLPBnjHj7x", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwmQ6GVT6aPmwYsIPB4AaABAg.9psaFvl5Yk09qLPuoI7Bj-", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwmQ6GVT6aPmwYsIPB4AaABAg.9psaFvl5Yk09qLVyLgSNgF", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxB-5Baw8hIxP1nEo54AaABAg.9pqIb5MsiJG9r6oaEVH32I", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "concern"},
  {"id": "ytr_UgznWis1WuoVKan8X5l4AaABAg.9ppTtKpba_K9pqmnnYGN-R", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytr_UgznWis1WuoVKan8X5l4AaABAg.9ppTtKpba_K9q9wqP6AMkZ", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgwfYWksj3nR-eWq0Td4AaABAg.9pn8bOrToyA9pxn-zPYho4", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgyK4J77nrbuJ3xmMwR4AaABAg.9pn6zouu7pv9px0_fADIsr", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
```
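The raw response is a JSON array with one record per comment, each carrying the four coding dimensions from the table above. A minimal sketch of how such a response could be parsed and sanity-checked in Python; the allowed value sets below are inferred only from the records visible in this sample (the real codebook may be larger), and the `validate` helper is hypothetical:

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytr_Ugzg1DHNpb4zFsWCdQp4AaABAg.9q-e2UH5Z1V9quTXzU0TAa",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwmQ6GVT6aPmwYsIPB4AaABAg.9psaFvl5Yk09qH231rkeOp",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

# Allowed codes inferred from this one sample; assumption, not the codebook.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "unclear", "none"},
    "emotion": {"fear", "concern", "outrage", "approval",
                "resignation", "indifference"},
}

def validate(records):
    """Map comment id -> coded dimensions, rejecting out-of-codebook values."""
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}: {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

coded = validate(json.loads(raw_response))
print(len(coded))  # 2
```

Validating against an explicit allow-list catches the common failure mode where the model invents a label outside the coding scheme, which would otherwise silently pollute downstream counts.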