Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples (click to inspect):

- `rdc_js0clms` · Russia's response to his arrest would be his "allies" seizing as much power and …
- `ytc_UgyMkzbKd…` · From America. I work in education in Japan (no, not an English teacher). The kid…
- `ytc_UgxAxoY7F…` · Current LLMs just predict the next most probable token. They seem to be intellig…
- `ytc_UgxeKzddp…` · At about 13:00, you start to touch on the real danger of AI. As computers become…
- `ytc_UgwLMkER-…` · Sorry open ai model can't even count to 200. What are you talking about. This AI…
- `ytc_Ugy112Ke6…` · The people who made these things are sociopaths and I have not seen a single art…
- `ytc_UgwwA4G4H…` · you put sm effort into making ur videos like finding certain ai generated emotio…
- `ytc_UgzbL5boW…` · Companies criticized are (YouTube, Google, Ubisoft, Roblox, TikTok, Meta, Disney…
Comment
I predict that as soon as we see the negative side effects of LLMs on a worldwide scale, one nation after another will start to implement limitations on the actual freedom that A.I. agency might have gained until then.
Every nation should install security measures now to avoid possible negative consequences of A.I. decisions. But more probably we will be forced to shut down disruptive LLMs or A.I. agents / bots.
One thing that is always forgotten: we might not be able to provide the electricity needed for A.I. agency to become dangerously powerful and "embodied" in any hardware system.
I also disagree with the concept of consciousness described here. Neurologically, we do not even know where and how we store our memories. Basically, those are completely virtual and non-existent until we make the effort of remembering them, which is a very special form of biased mental projection. Every time we remember, we change our memories.
Here is a test that proves this: try to remember a certain memory before you remembered it. Where is it? Good luck trying.
We do not know where this memory is stored, nor the full phenomenology of human memory processes. Because of this fact, an actual A.I. does not have anything available resembling human memory systems. It's basically mimicry.
We can produce "minds"; those are everywhere. But consciousness is more than a mind or an "intelligence", and I don't just mean its qualia.
youtube · AI Governance · 2025-06-17T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
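A coding result like the table above can be checked mechanically before it is stored. Below is a minimal sketch; the allowed values per dimension are inferred only from what is visible in this dump, and `validate_row` is a hypothetical helper, not part of the tool shown here. The project's real codebook may define more categories.

```python
# Allowed values per coding dimension, inferred from the values visible in
# this dump; the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"distributed", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "none"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def validate_row(row: dict) -> list:
    """Return the names of dimensions whose value falls outside the schema."""
    return [dim for dim, allowed in SCHEMA.items() if row.get(dim) not in allowed]

# The row from the Coding Result table above passes cleanly.
row = {"responsibility": "distributed", "reasoning": "consequentialist",
       "policy": "regulate", "emotion": "fear"}
print(validate_row(row))  # → []
```

Rejecting rows with any out-of-schema value keeps hallucinated labels from silently entering the coded dataset.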
Raw LLM Response
```json
[
{"id":"ytc_Ugz3d-s7IfN3vK_KzFh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzpMIiqpP3Cmc1LTXF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx7JuRaydqYjf1cQ4R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx1Xaz-M0RjnGjaGeJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwqpI7ejCBfr2TTfCx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYTvhIQwTmDWsni594AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxUweBofnfNTUpMHLV4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugznhjc2XtWrEvKOE-94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz1e29XYSdLhi5zUz94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7xK-mVXIHCj-cokx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
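The lookup-by-comment-ID view above can be reproduced from a raw batch response with standard JSON parsing. A minimal sketch, assuming the response is a JSON array of objects with the `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields shown above; `lookup_coding` and the two-row `RAW_RESPONSE` sample are illustrative, not part of the tool itself.

```python
import json

# A shortened stand-in for a raw LLM batch response: one object per coded
# comment, using the field names visible in the dump above.
RAW_RESPONSE = """
[
{"id":"ytc_Ugznhjc2XtWrEvKOE-94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz1e29XYSdLhi5zUz94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
"""

def lookup_coding(raw, comment_id):
    """Parse a raw batch response and return the coding row for one comment ID."""
    rows = json.loads(raw)
    return next((row for row in rows if row.get("id") == comment_id), None)

coding = lookup_coding(RAW_RESPONSE, "ytc_Ugznhjc2XtWrEvKOE-94AaABAg")
print(coding["policy"], coding["emotion"])  # → regulate fear
```

Returning `None` for an unknown ID (rather than raising) makes it easy to distinguish "comment was never coded" from a malformed response, which would fail earlier in `json.loads`.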