Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- SOMETIMES! The best "AI" models have a hallucination rate of 83%, which is a fan… (ytc_UgzEtQEJq…)
- @patientzerobeat Yeah it's not conscious but I was curious because a human devel… (ytr_UgwuL8t0x…)
- Things change when there's MASS unemployment and all the wealth is being concent… (ytc_UgwyjdDsg…)
- Until I played Detroit Become Human which is an exclusive game for PlayStation, … (ytc_Ugyc5fxb6…)
- No matter how many "pretty" images ai make, im sticking with drawing the illustr… (ytc_Ugyxo1OKz…)
- Rail companies have been working on positive train control aka PTC for almost tw… (ytc_UgwPHOefv…)
- Me and my AI have solved every mathematical equation and combine them all into o… (ytc_UgynBJjBj…)
- ITS REAL EASY TO FIX THE A.I. QUESTION...............................GIVE THE A.… (ytc_UgyXNvvdb…)
Comment
Don't fall into this trap of paranoia. Yes, AI will replace jobs and automate a lot of tasks, but do you know how hard it is to implement these projects? You need a lot of expertise, a dedicated team that focuses only on this, trial and error, operating it etc. Fail rate should be kept even with AI at 0.01%, something humans also had to keep up with. And from my expertise, i can guarantee you that there is a lot of learning needed for an AI model/Business case to be able to keep a 0% fail rate and still, you would need humans at the other end to check it's accuracy. Realistically, you can ofc automate a lot and teach an ai what to do and you can indeed do it with say 15 people instead of 25, but still, a lot of human intervention will be needed. Otherwise, the human race would hand over the world to the AI and do nothing anymore.
youtube | Viral AI Reaction | 2025-12-21T21:1… | ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz_qWfn7B-MLyFURgJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgylKJ3n7VfH-bcw7e94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxfk8YzPIcggOyWS8l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwPfsAREwx8SWjtJVl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz6usvCw4L_IgE3fEF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwD9o47tvL31Dzb6xd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzfVOgrhDBuclTawyx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzyMhcbMH-wf5iB4jh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyIaXjJpJHamuxmtMF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzgOuPEUtrEz5wD1ep4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
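The raw LLM response is a JSON array of per-comment codings keyed by comment ID, so the "look up by comment ID" feature above reduces to indexing the parsed array. A minimal sketch (the two records are copied from the response above; the variable names are illustrative, not the tool's actual code):

```python
import json

# A trimmed copy of the raw LLM response shown above: a JSON array
# of coding records, one per comment, each keyed by a comment ID.
raw_response = """
[
  {"id": "ytc_Ugz_qWfn7B-MLyFURgJ4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzgOuPEUtrEz5wD1ep4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

records = json.loads(raw_response)

# Build an index from comment ID to its coding record, so a lookup
# by ID is a single dictionary access rather than a linear scan.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UgzgOuPEUtrEz5wD1ep4AaABAg"]
print(coding["policy"], coding["emotion"])  # prints: ban fear
```

The same index can back the coding-result table: each record's non-`id` fields map directly onto the Dimension/Value rows shown above.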