Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytr_UgzUuIJ6E…`: "Yeah, it's the users fault not the AI. AI is just there to support, labas na ang…"
- `ytc_UgywKxqJA…`: "That's true mostly people whom I know who are not programmers are always say tha…"
- `ytr_Ugx7Hf401…`: "@twostate7822 it's level 2 because it can be activated anywhere, and theoretical…"
- `ytc_UgyOV1zSK…`: "Ai guessing game backfires home owner arrested due to their ring camera reportin…"
- `ytc_Ugxy_3uSM…`: "Well, you can't let the Ai doing everything on it's own but it can help. My code…"
- `ytc_UgzCG0MF8…`: "If AI is exclusively being trained on what is on the internet, then it is exclus…"
- `rdc_glix6jv`: "That's really my go-to video for this discussion, it's aged extremely well and c…"
- `ytc_UgwOXEjbN…`: "Crazy, let alone the liability and security factors. Big problem not everyone is…"
Comment
Hey guys, another great episode. For now, I wouldn’t worry too much about AI. As AI is linked to learning only what it is fed it never gets any better than the info we give it. The biggest problem is, as you say, letting AI take control long before it’s actually ready to control. In other words, like many ideas we thought were great, AI may turn out to be a menace, causing more problems than it solves. But before concluding it will be AI vs the humans, I'd be more inclined to conclude we could see AI vs AI. As we control what the AI models take in, I don't believe they could ever collude with one another. As people tend to ‘learn the hard way’ I expect a lot of mistakes will be made as we find a place for this new and innovative technology.
Source: youtube · AI Governance · 2023-07-07T03:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz2XspPSP_pKXVH8VJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzMnqCzw0khTseK8Vl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2hJmdL78WChHFYA14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxY0O55uMv-EP1izml4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy3MbHu_bOm_c9carh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw6N06L1qbNBN_js9d4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzrlpz7hAgwe7m767B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxVHsi8VFKBdb73A5p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx9L3lgngVDSwXBnWB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwD8u51A6cWa6OFcCB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
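Because the model returns one JSON array per batch, looking up the coding for a single comment means parsing the array and indexing it by ID. A minimal sketch of that lookup in Python, assuming only the response shape shown above (an array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields); the `index_codings` helper and the abridged two-entry payload are illustrative, not part of the tool:

```python
import json

# Abridged raw LLM response: two entries in the same shape as the full array above.
raw_response = """
[
  {"id": "ytc_UgxVHsi8VFKBdb73A5p4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzrlpz7hAgwe7m767B4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index each row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
# Look up one coded comment by ID and read a single dimension.
print(codings["ytc_UgxVHsi8VFKBdb73A5p4AaABAg"]["responsibility"])  # -> user
```

Indexing by ID also makes it easy to spot comments the model skipped or coded twice: compare the set of keys against the batch of comment IDs that was sent.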