Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Fr tho, like Ai is a genuinely useful tool, but it’s being treated like the plag…
ytr_Ugx2EK0rK…
If you ever want to replace other people with AI, replace yourself with AI first…
ytc_Ugx_l2sS1…
AI devs know how the learning process is done, but once the AI finished learning…
ytr_UgwOkxVW7…
I’ll say it again, but if so many jobs are lost through AI, who is left to buy s…
ytc_UgybiY0pL…
Not really, rogue AI isnt problem here. Even when swarm malfunctions and targets…
ytr_Ugy-HHsZC…
so there are people who post some AI saying that it is her drawing of anger and …
ytc_UgxD0PTrl…
I think AI is great for a lot of reasons but using it to manipulate people shoul…
ytc_Ugybsl_LK…
Well... I was always a shity human (got into it by accident) so this AI is cover…
rdc_jigme1y
Comment
When you create a new species (AI) that is smarter than you, you lose control of your future. I went to Harvard for computer science. Then AI came on the scene, and people predicted it would be decades before we should be concerned. Now we know it's much faster and smarter than we thought. Businesses are going to use it as it doesn't take vacations, doesn't need health care, and the list of why it's better in all respects for a company continues.
We thought it would take longer, but AI will soon be able to program itself. This new world will happen faster and faster. LLM (large language models) were thought only possible in 2050 or so, just five years ago. They started out being as smart as a high school student, new versions were created, and now as smart as a college graduate. Companies will be forced to use them or perish. Much like businesses taking advantage of cheap labor in China starting in the 70s, and soon American workers were "too expensive".
youtube
AI Harm Incident
2025-06-20T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
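A coding result like the table above can be sanity-checked against the dimension values that appear in the raw responses. This is a minimal sketch, assuming the allowed value sets inferred from the samples on this page (not the tool's actual schema); the function name `validate_coding` is hypothetical.

```python
# Allowed values per dimension, inferred from the coded samples shown on
# this page; this is an assumption, not the tool's authoritative schema.
ALLOWED = {
    "responsibility": {"developer", "government", "company", "distributed",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "mixed", "approval"},
}

def validate_coding(coding):
    """Return the names of any dimensions whose value is not allowed."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding result from the table above passes; a typo'd value does not.
example = {"responsibility": "developer", "reasoning": "consequentialist",
           "policy": "regulate", "emotion": "fear"}
print(validate_coding(example))                     # []
print(validate_coding({**example, "emotion": "joy"}))  # ['emotion']
```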
Raw LLM Response
[
{"id":"ytc_UgyD2lWZqHZy1dSQ7o14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugz_9iw7F5U6UvfUvyl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyZbzUIHwJMhoiFdRh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy17hjlShobN0rlaYp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzNo_ZL1K8by6V3yO94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy9GO5-0ZgxmfRulKl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyan9pTN0GLatkcbEh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzXL4YW8hAvverSSjh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxGL_Z5hmR_gqcBJw94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvpZtoRXKuHG4NqVZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
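The "look up by comment ID" step above can be sketched by parsing one raw response and keying each coding object by its `id`. This is a minimal illustration using the JSON shape shown above; the helper name `index_codings` and the inlined sample are assumptions, not the tool's implementation.

```python
import json

# Two coding objects in the same shape as the raw LLM response above
# (IDs and values copied from this page's samples).
raw_response = """
[
  {"id": "ytc_Ugyan9pTN0GLatkcbEh4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyD2lWZqHZy1dSQ7o14AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate",
   "emotion": "indifference"}
]
"""

def index_codings(response_text):
    """Parse one raw LLM response and key each coding by its comment ID."""
    codings = json.loads(response_text)
    return {coding["id"]: coding for coding in codings}

by_id = index_codings(raw_response)
print(by_id["ytc_Ugyan9pTN0GLatkcbEh4AaABAg"]["emotion"])  # fear
```

Keying on `id` makes the join back to the original comments a dictionary lookup rather than a scan of the response array, which matters once a batch contains many codings.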