Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
On the other hand, I was just listening to Hank Green saying AI is just a huge s…
ytc_UgxqWneRb…
And even tho AI it’s self warn against possible risk but like history has proven…
ytc_UgyKUHv42…
You mean the robot? Automating dog feeding isn't hard, you can buy automated ani…
ytr_UgwKVw0G_…
That was interesting! What AI(s) have you worked on? And do you think it's pos…
ytr_UgzouOxfV…
There are so many people talking about AI risks, but this is really the first on…
ytc_UgwWTs7Y6…
I believe we are focusing too much on the catastrophic "Skynet/Terminator" scena…
ytc_UgxG-RE0C…
I find this talk to be very much a avoidance of the issues of AI. Also very naiv…
ytc_Ugz1hfKZy…
If China is eating our lunch in AI. Then teach the kids computer programming as …
ytc_UgwapKmMx…
Comment
I can pretty well sum up my whole impression on the state of AI as this frontier fully appreciated by only one sort of creative individual; the financial ones. And there is nothing that a frontier disdains more than a boundary. All the data in the world that the most financially creative people have been constantly mining and exploring ways to capitalize on, and AI has arguably been the most expansive. It kind of begs the question, what real (pun intended) incentive is there to train AI in admittedly more ethical but costly ways - particularly as competition to engineer ever more robust models - will only keep getting fiercer?
And if I was to pose what I think should be a rapidly more pertinent question; amongst all the other considerations, as the demanding environmental impact of AI is only likely to compound with its prevalence, is AI really the frontier the world can tolerate?
youtube
2025-03-19T17:1…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgxMLCYT5_5rlV9jl4x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfimWrAbnNwl84E3V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxH29lo0OXZjVqL-AR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugy1c5lHA7zaNNolpF14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx6DXZOc-CUdZYIV794AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw7UX09QceKDSJTr6t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxwdEQqe-8eQk8To2x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxDdRbhvh4m6iMdcC54AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxY-zymWmzWZZZt-Dp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyxl1MR9bLzOnnbEil4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"})
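The look-up-by-comment-ID flow above can be sketched as follows. Note that the raw response shown ends with a stray `)` where a `]` would close the JSON array, so any parser has to tolerate that malformed close. This is a minimal illustrative sketch, not the tool's actual implementation: `parse_codings` is a hypothetical helper name, and the `raw` sample here reuses two rows from the response displayed on this page.

```python
import json

# Two rows copied from the raw response above; note the trailing ")"
# instead of "]", exactly as the model emitted it.
raw = '''[{"id":"ytc_UgxMLCYT5_5rlV9jl4x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyxl1MR9bLzOnnbEil4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"})'''


def parse_codings(text: str) -> dict:
    """Parse a coded-comment array, repairing a stray ')' close,
    and index the rows by comment ID for lookup."""
    text = text.strip()
    if text.endswith(")"):
        # Repair the malformed array close before handing off to json.loads.
        text = text[:-1] + "]"
    rows = json.loads(text)
    return {row["id"]: row for row in rows}


by_id = parse_codings(raw)
print(by_id["ytc_UgxMLCYT5_5rlV9jl4x4AaABAg"]["responsibility"])  # company
```

If every comment ID the coder expects is missing from the parsed rows, the coding-result table falls back to `unclear` values, which is one plausible reading of the result shown above.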