Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
No is the short answer. Current AI models are predictive large language models. They cannot transfer skills, they are not conscious. They cannot build more advanced AI. Humans do not even know what consciousness is, let alone bottling it and selling it. No, the danger in AI is automation. People start to rely on a system they don't understand. They don't understand its limits, and therein is where the issues arise.
youtube · AI Governance · 2025-08-03T14:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzfWPyNCIwQcwGPPrh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz8Ub9WXq2-nJySjQF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw59qExIGloJa5_PHx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzkoknAJMQYeduAuol4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwnXdWMeIR_vpHKLUN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzZBosOGHUOPot16ZR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzcPonyMgmRvHRq05x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyCPgQetf4DodPzNbJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz9d6EuDDEqQRJPj754AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxPp1U4SYxliIcHOkp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
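The raw model output is a JSON array with one object per coded comment, keyed by comment ID and carrying the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for per-comment lookup (the `raw` string below reuses two rows from the response above; any helper names are illustrative, not part of the tool):

```python
import json

# Two rows taken verbatim from the raw LLM response above;
# field names match the Coding Result table.
raw = """[
  {"id":"ytc_UgzcPonyMgmRvHRq05x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyCPgQetf4DodPzNbJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]"""

# Index the array by comment ID so the exact model output
# for any coded comment can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

code = codes["ytc_UgzcPonyMgmRvHRq05x4AaABAg"]
print(code["responsibility"], code["emotion"])  # -> user indifference
```

Indexing by ID mirrors the page's "look up by comment ID" behavior: the displayed comment's codes (responsibility = user, emotion = indifference) come straight from its row in the parsed array.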