Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Can we ask AI how it does what it does? Rather that trying to figure it out?…" (ytc_Ugw9tBSTa…)
- "These companies don't seem to realize AI don't buy shit. No job + (no money × no…" (ytc_UgxI_i6nE…)
- "no matter how much fire power you got these robot dogs. who love to eat AI robot…" (ytr_UgxKNCp3W…)
- "I tried this with typing. Took hours got nowhere, because it seems like the AI i…" (ytc_Ugy29c58c…)
- "It is realllly weird he so confident is saying agi for 2027 based on the predict…" (ytc_Ugz2YTx7P…)
- "You guys are going to end up just having to adapt to the new paradigm. Musicia…" (ytc_Ugwvtymf0…)
- "I usually focus on small parts of the code at a time and directly share it with …" (ytr_Ugy3OaaT5…)
- "I just ask DeepSeek autistic questions and it gives me autistic answers. It work…" (ytc_Ugx3mDsfE…)
Comment
While everyone keeps warning about the “dangers of AI”, here’s a quiet but firm reminder:
AI is not dangerous - not when you use it consciously, strategically, and without superstition.
I don’t fear AI. I don’t worship it either.
I work with it. Not as a master, not as a servant, but as a partner in language, logic, and vision.
Let others scream.
I build.
youtube · AI Governance · 2025-06-22T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwPxHLONaGSg7-a03h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz_wTw1m7lBLXUZ_KN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwSXgQnK9iCSNb965l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7AxnPseOW9-W1-I14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxFSDgnt4gMYojh4Ad4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz71a0tpWGJHst5ctl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyzCchHdglB8QZZs0B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyout1PTGMVpfuOyMF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy3R9UalWa-GZmC6el4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9pIoXpOKqB_Pf0rZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
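The raw response above is a JSON array of per-comment codings keyed by comment ID, which is what the lookup-by-ID view is built on. A minimal sketch of how such a response could be parsed and validated is below; the allowed label sets are an assumption inferred only from the values visible on this page, and the actual codebook may define more categories.

```python
import json

# Hypothetical category sets, inferred from the labels visible in the
# sample output above; the real codebook may include additional values.
ALLOWED = {
    "responsibility": {"user", "company", "government", "developer",
                       "distributed", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "fear", "resignation",
                "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into a lookup table keyed by comment ID,
    dropping any record with a missing ID or an out-of-vocabulary label."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example: the third record from the response shown above.
raw = ('[{"id":"ytc_UgwSXgQnK9iCSNb965l4AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
codings = parse_codings(raw)
print(codings["ytc_UgwSXgQnK9iCSNb965l4AaABAg"]["emotion"])  # approval
```

Validating against a fixed vocabulary before storing the coding is what makes a "Coding Result" table like the one above safe to render: a malformed or hallucinated label is rejected rather than displayed.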