Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Ai is to control humans wale up sheep you know it deep down love mother nature…" (`ytc_UgzN6ongp…`)
- "Actually the scary part is the AI saying we are not there athe point to be conce…" (`ytc_UgzC50bUB…`)
- "I think it is also important to emphasize that a lot of jobs that are going to v…" (`ytc_UgzyCWI2N…`)
- "I'm not crazy about that either, but at least an informed consumer is going to b…" (`rdc_enj64rq`)
- "this is the first time a person can fully be replaced though. this is what you a…" (`ytc_Ugw-trVV4…`)
- "Why continue to create a artificial intelligence that can out think, out perform…" (`ytc_UgyFV7zeO…`)
- "As someone who is pursuing his master's degree in IT Security ChatGPT and Claude…" (`rdc_m9i82b7`)
- "Dude, trains already go on a fixed path, and all of public transport carry tens …" (`ytr_UgxGK1Oxh…`)
Comment
Talking about what we do not understand. I agree with Wolfram that we tend to anthropomorphise and in fairness we do our best to make computers appear to be like us right down to robot humanoids. It is difficult to look too far into the future but to my mind two serious problems are firstly that we will deskill ourselves so much that many people will become totally dependant on tech. The second issue has already happened with computerisation of the stock market. You automate something that is told to do a specific thing in a specific situation but you have not foreseen a positive feedback loop that will do what you do not want, devalue the market in seconds. In such a situation someone presses a kill switch but it might be more dangerous with say automated warfare. I suppose this is and example of Stephen Wolfram’s computational irreducibility - the inductive process that has to be run to find out where the glitch is. Previously say writing code for a nuclear reactor control, a very extensive testing of the programme would be carried out and of course already has this capability. I suppose what I fear, (anthropomorphising!), is an over confident Dunning Kruger effect on a super smart system that is not quite as smart as it needs to be.
youtube · AI Governance · 2024-12-09T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz5qfO2p5ouopqxF9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw5jx3JN_iJjVdgF-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxgabcdIuRhNkDAGoZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzK0cxdklJv4XjEKQV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwk38JoiF5nupttEiV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgxUpWrqOtfeJUqbHoB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw9Yn37_qtH16HPxL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzRiCvRXTjY9wSaOpB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxrxwC9GQeGPZSOxHV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
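A response in this shape is straightforward to turn into a lookup table keyed by comment ID, which is what "Look up by comment ID" amounts to. The sketch below is illustrative, not the tool's actual implementation; the `index_by_id` helper name is hypothetical, and the two records are copied from the raw response above.

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment.
# (Two records copied from the response above; the real batch has ten.)
raw_response = """
[
  {"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw9Yn37_qtH16HPxL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID.

    Hypothetical helper; assumes the model returned well-formed JSON.
    """
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
coding = codings["ytc_Ugw9Yn37_qtH16HPxL54AaABAg"]
print(coding["responsibility"], coding["emotion"])  # prints: user mixed
```

In practice the parse step would need a fallback (retry or re-prompt) for the case where the model wraps the array in prose or emits invalid JSON, since `json.loads` raises `json.JSONDecodeError` on anything that is not strict JSON.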