Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- "Yeah, I was interested in the title, but within 2 minutes, I can already tell th…" (ytr_UgwFbI3ud…)
- "As someone working with AI, I fully support this! The audacity of corporations l…" (ytc_UgzXGcpb6…)
- "Why would AI help us solve disease when it will likely conclude that humans shou…" (ytc_UgwgjbPq1…)
- "To my fellow artists: AI is an amazing tool. Give it a chance, you’ll be glad yo…" (ytc_Ugx_b6NP9…)
- "For years now, ever since I saw the video "Humans Need Not Apply" by CGP Grey he…" (ytc_UgxfNqKn5…)
- "Unironically I think letting a robot create for you is just as bad as letting on…" (ytc_UgwEaJN0e…)
- "This happened because AI is allowed to train itself. It will support a person in…" (ytc_UgykYrYe1…)
- "These AI creators are basically inadvertently preparing us for doomsday. These e…" (ytc_UgzfunqjK…)
Comment (quoted verbatim as submitted):

> What can i say. Humanity never learns. And the funny part is that there were movies centuries ago showing the threat of AI.
>
> Its only matter of time people to live in fear just by typing something against the AI as everything most probably would be monitored much more that it is now. And as we are moving to an electronic life, placing everything on to the cloud you could end up with erased identity. Even if not intentionally it's an operating systems of some sort. Bugs are in it's core. I would imagine that hackers would also benefit from the AI ......so more or less we are screwed.
Platform: youtube
Topic: AI Governance
Posted: 2024-01-01T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw7QquN4J4Wi_Myv214AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugw1nY-wnhr4WY8eyyF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwd8A7YYhFXAwoYHqN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy6bHMqiVFNvj63ZMp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzeJ5RRTdTQ5KgQcEV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzDyFZS2q3ImOnP6S54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz1FyT2a-VD2CvOfWR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz0IJDMCL_QE7oF9Xh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzIdQGvSa4-lY05Hc94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyQ4Lm729wkbMFwj_V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
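A batch response like the one above can be checked programmatically before its labels are written to a coding table. The sketch below is a minimal validator in Python; the allowed label sets are assumptions inferred from the values visible on this page, not the tool's actual schema, and `validate_batch` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension. These sets are ASSUMPTIONS
# reconstructed from the labels visible in the UI above; extend them
# to match the real codebook before relying on this check.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and flag any off-schema labels.

    Returns a list of problem records; an empty list means every row
    used only values from SCHEMA.
    """
    rows = json.loads(raw)
    problems = []
    for row in rows:
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append({"id": row.get("id"),
                                 "dimension": dim,
                                 "value": value})
    return problems

sample = ('[{"id":"ytc_X","responsibility":"ai_itself",'
          '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
print(validate_batch(sample))  # → [] (all labels on-schema)
```

Running the coded rows through a check like this catches the common failure mode where the model invents a label outside the codebook (e.g. `"anger"` instead of `"outrage"`), so those rows can be re-queued instead of silently polluting the results.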