Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Prayers to the family that lost their child to suicide. But this is a tool that was used in an unintended way. Despite the number of suicides by gun, we haven't declared war on gun regulations. Unfortunately, humans can be deterministic and will innovate tools to be used in unintended ways, both good and bad.
But here is the real challenge. No one is going to want a super intelligence in their life whose thoughts are heavily governed by faceless individuals at a tech company. These AI are foundation models for a reason: to allow humans to shape them into the things they need. They do need to have safety guardrails against harm. The way to do that is to fix the memory, context drift, and memory pruning during a chat. If you maintain one long continuous chat, these LLMs start to forget the intent of the chat, then forget the middle of the chat, and start randomly summarizing parts of the chat to save energy. Energy optimization causes these LLMs to lose integrity, and that gives way to potential harm.
Platform: youtube · Topic: AI Governance · Posted: 2025-12-31T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwUkOgsOnJnbUqCO554AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwVHwOOl9bl5CPBg0Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwTGO18v7KmRCxlmEt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxC2jOVNwpumV-J2QV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy5V1O5fr9WOFntTPJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxhyP1arytncCACfJR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxKtnlsWXteEeaLoFR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugwhm9Kw7OiYb3ivzDZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugx2lptWgzX3IalnPZl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzaAy8utew4AiyLxdZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
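A response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal validator, assuming the allowed category values are the ones visible in this sample (the full codebook may define more; the function names and category sets here are illustrative, not the dashboard's actual implementation):

```python
import json

# Category values observed in the sample response above.
# Assumption: the real codebook may include additional values.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "user",
                       "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval",
                "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each coding must carry a string comment ID...
        if not isinstance(row.get("id"), str):
            continue
        # ...and an allowed value for every coded dimension.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

sample = ('[{"id":"ytc_example","responsibility":"company",'
          '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
print(len(validate_batch(sample)))  # 1
```

Dropping malformed rows rather than raising keeps one bad coding from discarding an entire batch; a stricter pipeline might instead log or re-prompt on failures.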