Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Reality is out delusion is in. Avoid academia that has political or corporate in…
ytc_UgyukIFiE…
@Skumtomten1 I stuck through with comp sci being my major because I was close en…
ytr_Ugz7hJrhi…
As any human being in this world, I love Pixar/Ghibli animation. But it get bori…
ytc_UgxQJYCmM…
> People have the library of Alexandria at their fingertips
An LLM is not a…
rdc_o8sd8b6
You have not been working with AI for 40 years, LARPER. and robots and AI are i…
ytr_Ugxa_qHJi…
"The U.S. government must move “quickly and decisively” to avert substantial nat…
rdc_kvdli81
The most powerful people in the world racing to have the best AI none of us want…
ytc_UgzzuC1zJ…
@rivenoak It's not just about personal relationships, it's about huge unemployme…
ytr_UgxSLnOWw…
Comment
I have been a fan of TYT for a long time now and will always be one, but it is obvious that they need some one more knowledgeable about tech when they cover things like Artificial Intelligence. Robots taking our jobs wouldn't really be a problem long term because are economies would adapt over time in order to compensate and would be overall a net benefit especially since it also create new more technical industry which pays far more thus improving quality of life for many. And new businesses could be created as man power becomes less of a factor, not only that but Ai could help in optimizing society and economies.
Any one who says Ai is overall bad is likely to be a short minded individual or simply not informed well enough to merit an opinion. However i do agree weaponized General Ai is dangerous.
youtube
2015-07-30T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgjkgmxODtESCXgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgjbpuyUEIR7OHgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugj1Uwatnd1hWngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgjEm9qE1zrMfHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UggRAIcWejPePHgCoAEC", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugjj5NuyX86BrHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgjMAnTX7OoOd3gCoAEC", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UghekMCcJDg0nngCoAEC", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugg0L8t7tYNYYHgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgjpK97zsewKUXgCoAEC", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"}
]
```
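A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not the tool's actual pipeline; the allowed values per dimension are inferred from the codes visible on this page and are assumed, not an official schema.

```python
import json

# Allowed values per coding dimension -- ASSUMED from the codes seen on
# this page (e.g. "developer", "consequentialist", "regulate", "outrage"),
# not an exhaustive or authoritative schema.
ALLOWED = {
    "responsibility": {"none", "developer", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation",
                "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID.

    Raises ValueError if any record carries a value outside ALLOWED, so
    malformed model output is caught before it reaches the database.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"approval"}]')
print(parse_coding_response(raw)["ytc_x"]["emotion"])  # -> approval
```

Validating against a fixed vocabulary also makes it easy to flag records the model coded as `unclear` for manual review.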