Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Idk why, but I’m picturing an automated city with only robots roaming its empty …" (ytc_UgyYoioi_…)
- "this it horrifying actually thanks i hate it. ai is already telling people to ki…" (ytc_UgzqvXTc3…)
- "A Christian Prayer for AGI, Wisdom, and the Future of Humanity / Heavenly Father,…" (ytc_UgxCwtjYj…)
- "What if u draw it then chatgpt urself as Ghibli one then see which one looks bet…" (ytc_Ugx8y6rTy…)
- "#tl;dr / Auto-GPT is an experimental open-source project that showcases the capab…" (rdc_jf74785)
- "The parents are also to blame for letting ChatGPT become a bigger part of his li…" (ytc_UgxWZm5GB…)
- "An AI can change its output (different from limiting) during a test because it h…" (ytc_Ugw-A34CT…)
- "For argument’s sake, would also be interesting to contemplate how AI will impact…" (ytc_UgxowcXAh…)
Comment
Dave's spitting a lot of facts here. Especially the pareto distribution comment. I too spend about 20% of the time writing code and 80% trying to figure out _how_ to write that code so it does what we expect it to do. I've been experimenting with CoPilot for help writing code: results have been mixed at best. Sometimes it comes up with a solution that simply doesn't do what it's supposed to do. This because documentation is often wrong/incomplete and that is what the model's been trained with. Sometimes it produces a valid solution but uses deprecated commands coz it's working off of historical solutions rather than current implementations. What it excels at is not programming as such, but finding information about a subject. I could see AI replacing the usual google search interface entirely. But so far, John Henry and his sledge hammer still prevail. It turns out the human mind is a lot harder to replace than you'd think...
youtube · AI Jobs · 2024-01-22T20:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzfjT2cjcILsFCg_Dt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx00X5I6lbBbVVJzIR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw8edV2_awfMfnMUbh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyiXYfYSERsz2Tvk0t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzlkG2ffP6X0S7hNv54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw4rpnhq3R6Zcaj2sJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwLEPAJA03Rz_hkNHp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxdgdUPWo8D_qHD5jF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxJjYCs3HMDNkEqt1V4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzhvxTO9KExVreBLUV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
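The raw response is a JSON array of coded records, one per comment, keyed by comment ID. A minimal sketch of how such a batch response could be parsed and looked up by ID — the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON above, but the function name and the abbreviated two-record payload here are illustrative, not the project's actual code:

```python
import json

# Abbreviated sample of a raw batch response (two records from the array above).
RAW_RESPONSE = """[
  {"id": "ytc_UgzfjT2cjcILsFCg_Dt4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxJjYCs3HMDNkEqt1V4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Map each comment ID to its coded dimensions (id field removed)."""
    records = json.loads(raw)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codes = index_by_comment_id(RAW_RESPONSE)
print(codes["ytc_UgxJjYCs3HMDNkEqt1V4AaABAg"]["emotion"])  # fear
```

Keying on the `id` field is what makes the "look up by comment ID" inspection possible: each coded comment's dimensions can be retrieved directly from the stored raw response.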