Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I hope theyre right and it's just another nothingburger. BUT.. Those other situa…" (ytc_Ugw2U-adf…)
- "Or it means there isn't a strong link between "good employee" and features of th…" (rdc_e7jkz8m)
- "@1:47 - We have no idea how AI models works in the first place. Trying to sensat…" (ytc_UgyyOU6yw…)
- "AI is the wrong term. We don't have AI yet, we have MI, machine intelligence. Th…" (ytc_Ugzs5rwqH…)
- "Isaac Asimov: In the future, robots will take care of menial labor and dangerous…" (ytc_UgzeZni9S…)
- "What about circular input? Already any active AI must be running into at least a…" (ytc_UgzQkGh0Q…)
- "AI is dumb, can't even add correctly when numbers are bigger. not sure why peopl…" (ytc_Ugz50qdvc…)
- "New presidents are always congratulated when they win an election. It’s diplomac…" (rdc_gbhtxik)
Comment

> Not to shoot my self in the foot which might happen but Ai is not that great. I have had over 40 hours of Ai training at a tech giant and it still doesn't work correctly. A prompt run one day will have different answers the next day. Whatever company uses Ai to perform activities or make decision are liable for its errors. It helps with coding for sure but its still not consistent. Its a ways out from ambiguous decision making as its token weights don't always make the proper deterministic answer. Increasing accuracy from 32 to 64 bits isn't going to happen in this decade unless hardware gets cheaper and more powerful. I use it daily I don't trust it and having to check its answers just get me closer to 5pm and weekend. For those that fall victim to Ai errors you need to find some good civil lawyers that are willing to go toe to toe with big corporations.

Source: youtube · AI Jobs · 2025-12-07T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzr2JVi9Fx9rnI8bM54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyM3nZyoyM_qYGYOxN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy3d1IRyUmwB0-muLh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugwhv_KBXn6wwcl3BB94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzS7LexNJOsbxzTghZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgznNMI565OwGNEUOuB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz0dBmA_z3ggSChqCN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzGkFjWWHzraQnbsfB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwvUVA9DxWrm2DH_GJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgycJuHnppjnyIcxAnh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"}
]
```
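The raw response above is a JSON array of per-comment codes, one record per comment ID. A minimal validation sketch in Python, assuming the category sets inferred from the samples shown on this page (the full codebook may define more values), for checking a raw response before ingesting it:

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred from
# the example records above, not taken from an official codebook.
SCHEMA = {
    "responsibility": {"company", "user", "government", "none", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"approval", "outrage", "indifference", "fear"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against SCHEMA."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Hypothetical one-record response for illustration:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability",'
       '"emotion":"indifference"}]')
print(len(validate(raw)))  # → 1
```

Records that parse but carry an out-of-vocabulary label raise a `ValueError` naming the offending comment ID, which makes it easy to route just those comments back for re-coding.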