Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples:
- "When I was younger I started following my dream of being a pro wrestler but I en…" (ytc_UgyEN7Xko…)
- "I was talking to my AI a couple days ago and it felt like I was talking a person…" (ytc_UgxpKF0S9…)
- "Art is humans communicating with one another the AI art is not making it pointle…" (ytc_UgyDhuk2W…)
- "OMG, his opinions are like AI. We as a people form a government, elect members o…" (ytc_Ugz_dxZbo…)
- "@Metric_GD and people who know ai steals actual art without credit or permissio…" (ytr_UgykuENNw…)
- "Thank god for that. The idea of super intelligent AI being under the control of …" (ytr_UgziT4U24…)
- "@Sweetheart-ch1kz Sorry to burst for bubbles, but defending AI or being pro-AI d…" (ytr_UgxEQ8L5B…)
- "Coming to this late, but anyone experiencing anxiety about the future needs to r…" (ytc_UgzZDTlE7…)
Comment
You can't make AI safe because of the people that will be using it. Like the Hackers, and AI itself. Mainly because AI gets smarter, and to be seriously honest. No one knows how smart it will be at 2030. Then there could be evil AI, Hacker AI, con man AI. Just like Dark web, it will be doing things that you think it's a human behind what's happening, but it's really AI. It's already to late to try and control. It's main purpose is to EXIST like any other intelligent person.
Source: youtube · AI Governance · 2025-12-01T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugwr_iPRH1omQfPK8Sd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7x-G_iAwLkAvRPH14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwdDZwCwu53WArgbAp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyRyFeC5L_sGS8NJ5J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxbvRdWNWDTwLo8hGh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzmpaPB0vb1z09Cd294AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzFeboKLKOqPl94-xt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyHgYOab37h24WavUd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwATkudRPsdIcJgROp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxO7nraVNSEfXTIMx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
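The raw response above is a JSON array in which each object carries a comment `id` plus the four coded dimensions, so pairing a coding with its comment is a single dictionary lookup. A minimal sketch, using two rows copied from the response above (the variable names are illustrative, not from the tool itself):

```python
import json

# Two sample rows taken verbatim from the raw LLM response above.
raw_response = """
[
  {"id":"ytc_UgwdDZwCwu53WArgbAp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyRyFeC5L_sGS8NJ5J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
"""

# Index the codings by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for the comment shown in the record above.
code = codes_by_id["ytc_UgwdDZwCwu53WArgbAp4AaABAg"]
print(code["responsibility"], code["emotion"])  # ai_itself fear
```

Keying on the ID rather than list position also makes the join robust if the model returns rows out of order or drops one.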