Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "In the UK 37.5 hours is the norm. It seems weird to do any more than that.…" (rdc_dv0mkaa)
- "Most AI learns from human interaction, does that no scare people more? The fact …" (ytc_Ugzi4YgNY…)
- "The main thing I've seen come out of AI generated art is instead if using images…" (ytc_UgyLyTIdT…)
- "Discussing the possibility of Robot Rights before even settling the question of …" (ytc_UghDAQ9vz…)
- "Why isn't AI replacing judges and lawyers? That would be way easier and safer t…" (ytc_Ugw7R4F5z…)
- "You didn't discover any darkside here. You didn't even test the limit of ChatGPT…" (ytc_Ugzq7_qvF…)
- "AI Bros can get pretty cringe but this feels like some artists just being a lil …" (ytc_UgwQ9eHzl…)
- "Sir! Thank you for spitting wisdom on how AI is going to fundamentally change t…" (ytc_UgwTtHqSk…)
Comment

> Something that stands out for me is Humans are or have Nefarious tendencies (usually motivated by money or control). I feel like this is why we think Ai will try to get us, because we always think it terms of self defense against a threat. But why would Ai want to threaten us? Why wouldn't it understand us as the creator of it and value our presence and existence? Why wouldn't it see us as an invaluable entity and do everything in its power to protect and preserve us? It seems to me that Humans are afraid of Ai because they immediately go to a place of threat and fear. Now I am not suggesting it cannot be weaponized and used against us but in order for that to happen it would have to be told to do so by other Humans. That is not Ai doing it on its own free will as a result of an evolving algorithm.

Source: youtube · AI Governance · 2025-08-01T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzYZx7ixsQ2X93WI8V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxtlGXf-muv0DNsBcF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7gMsrk2tZQrb9c-B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzYBrK9knQm3gcgJI14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFVgqBVm4lusFAwxV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzWZvAN6iTnPJEvdfF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwN6dP8DA2Y8xE68gJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxZb4sgNcBd6SkmfU14AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx7Cuszryod4mFZwiJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzLTktxrROcJSEC_OB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
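A raw response like the one above can be parsed back into per-comment codes and looked up by comment ID. The sketch below is a minimal, hypothetical example of that step; the allowed vocabularies in `ALLOWED` are assumptions inferred from the values visible on this page, not the tool's actual codebook.

```python
import json

# Assumed codebook: dimension names come from the raw response above;
# the value sets are inferred from the values seen on this page.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "government",
                       "distributed", "ai_itself"},
    "reasoning": {"virtue", "deontological", "consequentialist",
                  "mixed", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}

def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codes)
    into a dict keyed by comment ID, validating each dimension."""
    codes = {}
    for record in json.loads(raw_response):
        comment_id = record["id"]
        for dim, vocab in ALLOWED.items():
            if record[dim] not in vocab:
                raise ValueError(f"{comment_id}: bad {dim} value {record[dim]!r}")
        codes[comment_id] = {dim: record[dim] for dim in ALLOWED}
    return codes

# Two records excerpted from the raw response shown above.
raw = """[
 {"id":"ytc_UgzYZx7ixsQ2X93WI8V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugx7gMsrk2tZQrb9c-B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

codes = index_codes(raw)
print(codes["ytc_Ugx7gMsrk2tZQrb9c-B4AaABAg"]["policy"])  # regulate
```

Looking up by comment ID is then a plain dict access, which is how the "Look up by comment ID" view above could resolve a single comment's codes.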