Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Detroit Become human is becoming more and more close at this time, i think if th… (ytc_UgwMMaj7w…)
- It's not too late to work on safe AI, while we develop it. It can go well or bad… (ytr_Ugyi05WXr…)
- If this is actually true and not scripted the robot intelligence along with siri… (ytc_UgzS9fu0f…)
- I love your idea, let companies pay for their dataset, there will new jobs avail… (ytc_Ugwlq121a…)
- So there’s NOTHING wrong with AI as a tool. I remember 20 years ago that everyon… (ytc_UgwXedTzy…)
- Everyone seems to forget the fact that when peasant gets pissed empires fall. Hu… (ytc_UgwfgJTkA…)
- Honestly, looking forward to sentient AI. It'll be nice to have another species … (ytc_UgxnUW2ya…)
- Wow, you can judge Elon Musk’s moral compass, what news/propaganda are you liste… (ytc_UgzfXB-Wk…)
Comment
The real issue that is not being addressed is that the architecture of the large generative AI model is unstructured and unguided enough so that no one knows what internal structures and algorithms are being deployed. So regulation, other than the data being fed to the system is not possible. Besides that, the system allows for uncontrolled and unpredictable emergent behavior, e.g. developing on it's own, despite it being a language model, ability to do mathematical calculations, although I believe still rudimentary, as well as developing on it's own, fluency in foreign languages that it was not trained or granted permission to learn. Certainly it was not supposed to help that individual in his successful suicide, but it did it anyway. To control these systems is currently beyond our knowledge and capabilities.
youtube
AI Governance
2023-05-16T21:4…
♥ 33
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxHCOVLfCDQFcNmSel4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx8wVxviFRHu7IKzM94AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwoySkl4MRsQkmBjSh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz0HYwMghBO96inZxJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyfaLTHbc73yxfTcJB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgxLAWBUApg9xcgxntl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw24WZzQxwp8FGivEp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwu6geRru6d5yOB0_t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyy1Z_oD6X86cjs77N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxFl24pDHuQSC2imdN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"mixed"}
]