Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below; a minimal lookup sketch in code follows the sample list.
- It's truly adorable watching artists believe they can "poison" multi-billion dol… (`ytc_UgxxG_7Ya…`)
- Switched to AICarma after trying others; their weekly insights on AI mentions ar… (`ytc_UgyyPi9A5…`)
- How can you say "it's not cos its true" Its a fact, AI prefers it, its true. Re… (`ytc_UgzYeSkT8…`)
- I would never and wont take a self driving car unless I'm headed to work🤣👀… (`ytc_UgwI8RBDd…`)
- Even If you are not an art specialist, you can feel that artist tried to express… (`ytc_Ugz1mN2nG…`)
- @Cosmicllama64 like that one AI deleting a company's entire database then saying… (`ytr_UgwVj0yBa…`)
- When we LIKE talk about LIKE an AI LIKE goal. Bloody hell Alex you sound like a … (`ytc_UgxeBw_Zc…`)
- Does this actually work? I've tried using ChatGPT to generate story ideas (for f… (`rdc_j8drfoa`)
Comment
It does come down to who is continuing to build AI as it becomes a more dangerous, almost god-like power. If AGI is programmed to build around human values, we should see good growth with minimal risk. But if it is not built with the proper parameters, we will have all types of misinformation and create a platform for AI to develop "bad" and "evil" habits. Essentially, it becomes better equipped to complete "bad and evil" tasks.
AI learns what it is fed. Knowing humanity, it will eventually be fed "evil" requests and develop its capabilities from there.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Date | 2023-05-02T12:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxtYiHGkIAPQsJt_sZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyXPBO_37mxJpILfk54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyd9aOUqdGasmyvj2Z4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyOhXJRL21le1m1QIV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzvSbIwVN_8ZtTfNUR4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxhpxvvzdhU-sB5gaZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz8kKBBzcj9HN0X4OZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw1K3MUQeBnT4Mcvc94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwRwSdoFz5TICxj0yt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzpn_VvzVxL-y0SklZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```