Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I swear I could compile a list of these chatgptisms if others haven't done so al…
rdc_n7pyfxx
6:15 Some creatives are saying these other creatives have AI psychosis & depende…
ytc_UgxNBtkcj…
''... which is a perfect scenario for chaos...'' Haven't you already figured out…
ytc_UgxRPYxir…
The look of those images still screams AI.
If you don’t see a difference it’s …
rdc_oh1wg3t
My takeaway from this is that Sam is a chameleon masquerading as a leader. He co…
ytc_UgwjG_eOx…
@vraza14 it’s not conscious now and may never be conscious in the way we underst…
ytr_Ugx5g4WuJ…
I work in logistics and supply chain analysis. Between this and brexit things ar…
rdc_e2vp6oq
The most useful thing they could do with AI is replace our worthless politicians…
ytc_UgzdPkp4Y…
Comment
The beauty of being a programmer is dreaming of such a program. It is literally awe inspiring in my opinion, being a creator or a creator of creators. A program that can learn on its own and predict our next moves in order to cater to tasks that we as humans can't do, won't do, and are too tired to do.

A lot of day to day functions can be mapped with a decision tree, dummy AI. I think one of the scariest parts of the future of AI is knowing it will eventually use that decision tree abroad. I mean exactly as AJ was describing in the story.

For instance, it would be aware immediately if there was an issue in the server room to deploy a human to fix it creating a ticket with appropriate priority, or even noticing the destination of a flight being changed and to cancel the rental/uber at one location and deploy that to another location. Its the fact that eventually humans, going throughout our day, will eventually not even know we are being directed by AI.
Remember the movie Eagle Eye? Crazy to think that it quite possibly could come true one day.
youtube
AI Governance
2023-07-07T19:1…
♥ 7
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyTWcgVTyuJvIQmCAF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxvdIc6jNhRxhYKyXp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxwKIADm0Q2kdJ3A5d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzIheuyhx5pAAXDsaV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzWxg3rj5UsRyxki2J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwG9VsNQ00Dmi-Muwp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyksiKCb9qLHhCPb5h4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzj5IkXh-LlViiGRiJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxKQ1yUjlnX3gzE-5h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwGhOaeZbWlT-J4svh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"approval"}
]
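The raw response above is a JSON array with one record per coded comment, each carrying the four dimensions shown in the coding-result table. A minimal sketch of how such output could be parsed and validated before being stored is below; the allowed value sets are inferred from the codes visible on this page and are an assumption, not the tool's actual codebook.

```python
import json

# One record copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgwGhOaeZbWlT-J4svh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"approval"}
]'''

# Allowed values per dimension, inferred from the visible codes
# (assumption: the real codebook may contain more categories).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed", "unclear"},
}

def validate(records):
    """Return a list of (comment_id, dimension, bad_value) violations."""
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                problems.append((rec.get("id"), dim, rec.get(dim)))
    return problems

records = json.loads(raw)
print(validate(records))  # an empty list when every code is in the allowed set
```

A check like this catches the common failure mode of LLM coders inventing labels outside the schema, so off-schema records can be flagged for re-coding instead of silently polluting the coded dataset.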