Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
| Comment (excerpt) | ID |
|---|---|
| If the AI feels that humans are bad and should be destroyed, then why would thei… | ytc_UgzJw85Fo… |
| What's the big deal, we've had an Anti-human Intelligence (AI) on this planet fo… | ytc_Ugzt166IN… |
| @Mrhellslayerz ...well today I am an AI artist because tech took my previous job… | ytr_Ugz5zD7MJ… |
| Robots don’t get tired robots don’t get attitudes robots don’t have marital prob… | ytc_UgzIlfnGn… |
| Hmm...am I mistaken that the bubble is in the form of investments and stock pric… | rdc_nc20otj |
| As an elder millennial, your comment about gen z being a cusp or liminal generat… | ytc_UgwMxKkb7… |
| Big tech will just use AI and AGI to do our jobs, make more money, and leave us … | ytc_UgyLqtCfK… |
| @10:30 - "This idea that we should view AI with a maternal instinct that AI is g… | ytc_UgxoQFSi8… |
Comment
This is a brilliant deep dive by CNBC. What stands out most here is the shift in value proposition: AI Safety is no longer just a 'compliance box' to check—it’s now a core competitive advantage for Enterprise AI.
For businesses, the risk of hallucination or data leakage isn't just a technical glitch; it's a massive liability. Anthropic’s focus on 'steerability' and reliability aligns perfectly with what the B2B market actually needs: Predictability.
As the video mentions, we are moving into an era where 'National Security' and 'Economic Security' are indistinguishable. Whether you're a startup or a Fortune 500, integrating AI requires a 'Safety First' architecture. If we don't prioritize risk mitigation now, the cost of innovation might be too high later.
Great to see the industry maturing from 'move fast and break things' to 'move fast and build things that last.' 🛡️🤖📈
youtube
2026-01-10T19:1…
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyMpCoI1Y1aVNm7ipx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyr-lmtChWEsqPY9HN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZHCTHxHTFP83W5ON4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVR25_o2IbqgFva9l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfVOXBBlHuFkK5JmJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxm83IUcir1T9Ml5YN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw9C-dJB53InME9JHB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxvTa9Ld-MB2ccdFuB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzihWP2SqIi8eRT-9F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy-nlFvomqNrUO9Udh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
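The raw response is a JSON array of per-comment codings, one object per comment, with the four coded dimensions shown in the table above. A minimal sketch of how such a batch might be parsed and indexed for the "look up by comment ID" view (assuming Python; the field names follow the response format shown, but the two-entry sample data and the `index_codings` helper are illustrative, not the tool's actual implementation):

```python
import json

# Illustrative two-entry excerpt in the same shape as the raw response
# above; a real batch would carry one object per submitted comment.
raw_response = """
[
  {"id": "ytc_UgxvTa9Ld-MB2ccdFuB4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgyZHCTHxHTFP83W5ON4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# The four coded dimensions, matching the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index codings by comment ID.

    Skips malformed entries (non-objects, missing ID, or missing
    dimensions) instead of failing the whole batch, since model
    output is not guaranteed to be well-formed.
    """
    by_id = {}
    for entry in json.loads(raw):
        if not isinstance(entry, dict) or "id" not in entry:
            continue
        if not all(dim in entry for dim in DIMENSIONS):
            continue
        by_id[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_UgxvTa9Ld-MB2ccdFuB4AaABAg"]["policy"])  # liability
```

Keying the parsed batch by comment ID is what makes the per-comment lookup above cheap: each coded comment's row can be fetched in constant time from its ID prefix.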