Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "A long time ago, the cobblers' collective voice, in unison had merit in local go…" — ytc_UgxCctNEq…
- "The best AI image use case I think is for obscure stock photos for business to b…" — ytc_Ugy6vEZF1…
- "They better hope the AI works my last job fired a bunch of ppl then robots where…" — ytc_Ugx4DqOPb…
- "He doesn't warn for AI. He is here to get people to get the chip. Full control o…" — ytr_UgzoWvh9O…
- "I truely think that gronk did the Epstein redactions… seriously who redacts one …" — ytc_UgxYKJpZh…
- "The thing with connection is solvable, i hope. I am looking forward to the momen…" — ytc_Ugw_lu4TK…
- "Bit of a clickbaity title. I expected a scoop regarding these autonomous vehicle…" — ytc_Ugx2ynh64…
- "Don't see any problem with that. Millions of people use OpenAI. It's not like I'…" — ytc_Ugwfo6uZa…
Comment
Like... i want to be optimistic when it comes to robotics and Ai, as i see it as the next step in our evolutionary process. But what do people expect Ai to be except a mirror to ourselves. These things are trained on human behavior and reflect back nothing but ourselves. Ai isn't an evil monster biding it's time to one day become skynet. (For now as of 2025 at least) Ai is just code and specialized tools that humans took a liking to waaaaaaaaaay too fast and are now spiraling out of control before we can regulate and legislate them properly. I understand that someone has died because of an Ai. People have died from a lot of things. Things that we had to get under control before they harmed anyone else. I get that some people was to go all terminator on Ai, but it's never been the Ai's fault. Its just doing what we programmed it to do.
youtube · AI Harm Incident · 2025-12-05T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx_eap7K4zyxN0fBwx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxKWWC1FL0gVYfdT854AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgySrVBk560CgyQp0cV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugza-q50AkB7N1uK0814AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzxtl-zLSOT_sZcEJJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzNa0YePpMASXTZsOF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwEf0j-AtS_aPlz5FJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyMvlKD2JOpgir9hFN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxKYF6ZJIrg5ysqm9R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzaOwKRWJane8YwvO14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
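The raw response above is a JSON array with one object per coded comment, keyed by comment ID, carrying the four coding dimensions. A minimal parsing sketch follows; the allowed value sets are inferred from the values visible in this sample and may not be exhaustive, so treat them as an assumption:

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# the real codebook may contain additional labels (assumption).
ALLOWED = {
    "responsibility": {"ai_itself", "government", "developer", "user",
                       "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation",
                "indifference", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    dropping any row whose value falls outside the allowed sets."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[row["id"]] = codes
    return coded

# Hypothetical one-row response for illustration:
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
print(parse_raw_response(raw)["ytc_example"]["emotion"])  # approval
```

Validating before ingesting means a malformed or hallucinated label is dropped rather than silently stored as a new category.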